
Server-side programming with the libevent library on Linux

1. Background

A TCP server typically relies on non-blocking sockets and I/O multiplexing. Linux supports epoll and select, while Windows supports select and IOCP; for cross-platform portability, I/O event handling needs to be wrapped behind a compatible abstraction.

2. Related knowledge

2.1 Event-driven I/O (I/O multiplexing)

The select, poll, and epoll calls commonly used on the server side all belong to the I/O-multiplexing model, typically used to handle many network connections on a single thread. Borrowing the figure from Stevens' UNP: with I/O multiplexing the application hands all of its connections to the kernel for monitoring, and when a connection becomes readable or writable, the kernel notifies the application to handle it.
The alternative model is multi-threaded (or multi-process) handling, with one thread (or process) per connection. Comparing the two [2]: the multi-threaded approach has simple processing logic but high overhead and easily hits system limits, while I/O multiplexing demands more careful processing logic but can handle a very large number of connections.

2.2 An introduction to libevent

libevent is a lightweight, high-performance network library with the following features [1]:

  1. Event-driven and high-performance;
  2. Lightweight and focused on networking (compared with ACE);
  3. Open source, with concise and readable code;
  4. Cross-platform, supporting Windows, Linux, BSD, and macOS;
  5. Supports multiple I/O multiplexing mechanisms (epoll, poll, /dev/poll, select, kqueue, etc.); it abstracts over the multiplexing models of different operating systems, so different backends can be selected while the same event functions provide the service;
  6. Supports I/O, timer, and signal events;
  7. Uses the Reactor pattern.

2.3 Commonly used libevent functions

In libevent, signal handling, socket handling, and timer handling are all treated as events and dispatched uniformly by an internal dispatcher. First, the functions for creating a libevent instance (event_base_new), running the dispatch loop (event_base_dispatch), breaking out of the loop (event_base_loopbreak), and freeing the instance (event_base_free), all declared in event.h:
/**                                                                                                                           
 * Create and return a new event_base to use with the rest of Libevent.                                                       
 *                                                                                                                            
 * @return a new event_base on success, or NULL on failure.                                                                   
 *                                                                                                                            
 * @see event_base_free(), event_base_new_with_config()                                                                       
 */                                                                                                                           
struct event_base *event_base_new(void);

/**
   Event dispatching loop                                                                                                     

  This loop will run the event base until either there are no more added                                                      
  events, or until something calls event_base_loopbreak() or                                                                  
  event_base_loopexit().                                                                                                      

  @param base the event_base structure returned by event_base_new() or                                                        
     event_base_new_with_config()
  @return 0 if successful, -1 if an error occurred, or 1 if no events were                                                    
    registered.
  @see event_base_loop()                                                                                                      
 */ 
int event_base_dispatch(struct event_base *);

/**
  Deallocate all memory associated with an event_base, and free the base.

  Note that this function will not close any fds or free any memory passed
  to event_new as the argument to callback.

  @param eb an event_base to be freed
 */
void event_base_free(struct event_base *);

/**                                                                                                                           
  Abort the active event_base_loop() immediately.                                                                             
                                                                                                                              
  event_base_loop() will abort the loop after the next event is completed;                                                    
  event_base_loopbreak() is typically invoked from this event's callback.                                                     
  This behavior is analogous to the "break;" statement.                                                                       
                                                                                                                              
  Subsequent invocations of event_loop() will proceed normally.                                                               
                                                                                                                              
  @param eb the event_base structure returned by event_init()                                                                 
  @return 0 if successful, or -1 if an error occurred                                                                         
  @see event_base_loopexit()                                                                                                  
 */                                                                                                                           
int event_base_loopbreak(struct event_base *);
Next come the functions for creating an event (event_new), freeing it (event_free), adding it to the monitored set (event_add), and removing it from the monitored set (event_del):
/**
  Allocate and assign a new event structure, ready to be added.

  The EV_PERSIST flag can also be passed in the events argument: it makes                                                     
  event_add() persistent until event_del() is called.                                                                         

  The EV_ET flag is compatible with EV_READ and EV_WRITE, and supported                                                       
  only by certain backends.  It tells Libevent to use edge-triggered                                                          
  events.                                                                                                                     

  The EV_TIMEOUT flag has no effect here.                                                                                     

  It is okay to have multiple events all listening on the same fds; but                                                       
  they must either all be edge-triggered, or all not be edge triggered.

  When the event becomes active, the event loop will run the provided
  callback function, with three arguments.  The first will be the provided
  fd value.  The second will be a bitfield of the events that triggered:
  EV_READ, EV_WRITE, or EV_SIGNAL.  Here the EV_TIMEOUT flag indicates
  that a timeout occurred, and EV_ET indicates that an edge-triggered
  event occurred.  The third will be the callback_arg pointer that
  you provide.

  @param base the event base to which the event should be attached.                                                           
  @param fd the file descriptor or signal to be monitored, or -1.
  @param events desired events to monitor: bitfield of EV_READ, EV_WRITE,                                                     
      EV_SIGNAL, EV_PERSIST, EV_ET.
  @param callback callback function to be invoked when the event occurs                                                       
  @param callback_arg an argument to be passed to the callback function                                                       

  @return a newly allocated struct event that must later be freed with                                                        
    event_free().
  @see event_free(), event_add(), event_del(), event_assign()                                                                 
 */
struct event *event_new(struct event_base *, evutil_socket_t, short, event_callback_fn, void *); 

/**
   Deallocate a struct event * returned by event_new().

   If the event is pending or active, first make it non-pending and
   non-active.
 */
void event_free(struct event *);

/**
  Add an event to the set of pending events.

  The function event_add() schedules the execution of the event ev when the
  condition specified in event_assign()/event_new() occurs, or when the time
  specified in timeout has elapsed.  If timeout is NULL, no timeout
  occurs and the function will only be called if a matching event occurs.
  The event in the ev argument must already be initialized by event_assign()
  or event_new() and may not be used in calls to event_assign() until it is
  no longer pending.

  If the event in the ev argument already has a scheduled timeout, calling
  event_add() replaces the old timeout with the new one, or clears the old
  timeout if the timeout argument is NULL.

  @param ev an event struct initialized via event_assign() or event_new()
  @param timeout the maximum amount of time to wait for the event, or NULL
         to wait forever
  @return 0 if successful, or -1 if an error occurred
  @see event_del(), event_assign(), event_new()
  */
int event_add(struct event *ev, const struct timeval *timeout);

/**
  Remove an event from the set of monitored events.

  The function event_del() will cancel the event in the argument ev.  If the
  event has already executed or has never been added the call will have no
  effect.

  @param ev an event struct to be removed from the working set
  @return 0 if successful, or -1 if an error occurred
  @see event_add()
 */
int event_del(struct event *);
Finally, a few of libevent's buffer-management interfaces (evbuffer); buffer handling is always a crucial part of a server. An evbuffer implements a dynamically sized, logically contiguous memory region that mediates reads and writes between application data and the network device. The evbuffer creation and destruction functions (buffer.h):
/**
  Allocate storage for a new evbuffer.                                                                                        

  @return a pointer to a newly allocated evbuffer struct, or NULL if an error                                                 
    occurred                                                                                                                  
 */
struct evbuffer *evbuffer_new(void);                                                                                          
/**
  Deallocate storage for an evbuffer.                                                                                         

  @param buf pointer to the evbuffer to be freed                                                                              
 */
void evbuffer_free(struct evbuffer *buf);

evbuffer supports socket I/O: for example, reading data from the network with evbuffer_read, then removing it with evbuffer_remove once it has been parsed and processed:
/**
  Read from a file descriptor and store the result in an evbuffer.

  @param buffer the evbuffer to store the result
  @param fd the file descriptor to read from
  @param howmuch the number of bytes to be read
  @return the number of bytes read, or -1 if an error occurred
  @see evbuffer_write()
 */
int evbuffer_read(struct evbuffer *buffer, evutil_socket_t fd, int howmuch);

/**
  Read data from an evbuffer and drain the bytes read.

  If more bytes are requested than are available in the evbuffer, we
  only extract as many bytes as were available.

  @param buf the evbuffer to be read from
  @param data the destination buffer to store the result
  @param datlen the maximum size of the destination buffer
  @return the number of bytes read, or -1 if we can't drain the buffer.
 */
int evbuffer_remove(struct evbuffer *buf, void *data, size_t datlen);

Data is appended with evbuffer_add and sent out over the network with evbuffer_write; related helpers include evbuffer_add_file and evbuffer_add_printf:
/**
  Append data to the end of an evbuffer.

  @param buf the evbuffer to be appended to
  @param data pointer to the beginning of the data buffer
  @param datlen the number of bytes to be copied from the data buffer
  @return 0 on success, -1 on failure.
 */
int evbuffer_add(struct evbuffer *buf, const void *data, size_t datlen);

/**
  Write the contents of an evbuffer to a file descriptor.

  The evbuffer will be drained after the bytes have been successfully written.

  @param buffer the evbuffer to be written and drained
  @param fd the file descriptor to be written to
  @return the number of bytes written, or -1 if an error occurred
  @see evbuffer_read()
 */
int evbuffer_write(struct evbuffer *buffer, evutil_socket_t fd);

3. A server programming example

Requirement: write a program that runs a TCP service on a given address and port, and receives client traffic indefinitely (without parsing it) so that throughput can be measured.
The main function's steps: parse the input, create the libevent instance, register and start the listener, register the signal event, and enter the dispatch loop.
int main(int argc, char *argv[])
{
	int opt = 0;
	instance_t inst = { .sfd = -1, .enable = 1 };	/* sfd = -1 so the cleanup path never closes fd 0 */

	if ( argc < 2 ) {
		usage(argv[0]);
		goto _E1;
	}

	/* Initialize config */
	inst.cfg.tcp_port = 5001;
	inst.cfg.tcp_addr = strdup("127.0.0.1");

	while ( (opt = getopt(argc, argv, "s:p:")) != FAILURE ) {
		switch ( opt ) {
			case 's':
				FREE_POINTER(inst.cfg.tcp_addr);
				inst.cfg.tcp_addr = strdup(optarg);
				break;

			case 'p':
				inst.cfg.tcp_port = (u16)atoi(optarg);
				break;

			default:
				usage(argv[0]);
				goto _E1;
		}
	}

	/* Initialize the event library */
	inst.base = event_base_new();
	assert(inst.base);

	/* Initialize events */
	inst.ev_signal = evsignal_new(inst.base, SIGTERM, __on_signal, &inst);
	assert(inst.ev_signal);

	event_add(inst.ev_signal, NULL);

	__do_listen(&inst);

	event_base_dispatch(inst.base);
	evconnlistener_free(inst.ev_listener);
	event_free(inst.ev_signal);
	event_base_free(inst.base);
_E1:
	evutil_closesocket(inst.sfd);
	FREE_POINTER(inst.cfg.tcp_addr);
	return EXIT_SUCCESS;
}
For signal handling, this example monitors SIGTERM: on receiving the termination signal, it breaks out of the dispatch loop:
static void __on_signal(evutil_socket_t fd, short event, void *arg)
{
    instance_t *pinst = (instance_t *)arg;
    event_base_loopbreak(pinst->base);
    printf("SIGTERM Breakout...\n");
}
The listener is set up with libevent's utils and listener helpers, chiefly the evconnlistener_new_bind function. A quick look at its implementation: it binds and listens on the given address and port, and registers the listen socket for readable events (on epoll, for instance); when a new connection arrives, the listen socket becomes readable and the listener loops over accept (the socket is non-blocking):
static void __on_accept(struct evconnlistener *ev_listener, evutil_socket_t cfd,
			    struct sockaddr *paddr, int socklen, void *args)
{
	struct sockaddr_in *pcaddr = (struct sockaddr_in *)paddr;
	__do_alloc((instance_t *)args, cfd);
	printf("[L] Setup new connection %s:%hu, fd: %d\n",
			inet_ntoa(pcaddr->sin_addr), ntohs(pcaddr->sin_port), cfd);
}

static void __do_listen(instance_t *pinst)
{
	struct sockaddr_in saddr = {0};
	saddr.sin_family = AF_INET;
	saddr.sin_port = htons(pinst->cfg.tcp_port);
	inet_aton(pinst->cfg.tcp_addr, &saddr.sin_addr);

	pinst->ev_listener = evconnlistener_new_bind(pinst->base, __on_accept, pinst,
			LEV_OPT_REUSEABLE | LEV_OPT_CLOSE_ON_FREE, -1, (struct sockaddr *)&saddr, sizeof(saddr));
	assert(pinst->ev_listener);
}
To fill in the structure definitions: instance_t is the program-wide instance structure, and item_t is the per-connection context structure.
typedef struct instance
{
    u8 enable;
    evutil_socket_t sfd;

    struct config {
        u16 tcp_port;
        char *tcp_addr;
    } cfg;

    struct event *ev_signal;
    struct evconnlistener *ev_listener;
    struct event_base *base;
} instance_t;

typedef struct item
{
    instance_t *pinst;

    struct event *ev_client;
    struct evbuffer *evbuf_req; /* request */
    struct evbuffer *evbuf_rsp; /* response */

    u64 total_read;
    u64 total_send;
} item_t;

Continuing the flow: when a new connection is established, the __on_accept callback runs, which in turn calls __do_alloc. That function initializes the connection's item_t structure, creates its evbuffers, and registers the new connection's readable event for monitoring:
static void __do_alloc(instance_t *pinst, evutil_socket_t fd)
{
    item_t *pitem = (item_t *)calloc(1, sizeof(item_t));
    assert(pitem);

    pitem->pinst = pinst;
    pitem->evbuf_req = evbuffer_new();
    pitem->evbuf_rsp = evbuffer_new();

    assert(pitem->evbuf_req);
    assert(pitem->evbuf_rsp);

    /* Make the socket non-blocking before monitoring it */
    assert(SUCCESS == evutil_make_socket_nonblocking(fd));

    pitem->ev_client = event_new(pinst->base, fd, EV_READ | EV_PERSIST,
            __on_request, pitem);
    assert(pitem->ev_client);
    event_add(pitem->ev_client, NULL);
}

Now, whenever the client sends data, a readable event fires on the socket and __on_request is called. For simplicity, __on_request skips any data parsing and simply drains the buffer once it exceeds 10 MB:
static void __on_request(evutil_socket_t fd, short event, void *arg)
{
    item_t *pitem = (item_t *)arg;
    int read_byte = evbuffer_read(pitem->evbuf_req, fd, -1);
    if ( read_byte <= 0 ) {
        printf("[%2d] Close, read cnt: %llu\n", fd, pitem->total_read);
        __do_close(fd, pitem);
        return;
    }

    pitem->total_read += read_byte;

    if ( evbuffer_get_length(pitem->evbuf_req) > 10 * 1024 * 1024 ) {
        assert(0 == evbuffer_drain(pitem->evbuf_req, 10 * 1024 * 1024));
        printf("[%2d] Reset, read cnt: %llu\n", fd, pitem->total_read);
    }
}

When the client closes the connection, or the connection fails, evbuffer_read returns a value less than or equal to zero, and __do_close closes the connection and frees its resources:
static void __do_close(evutil_socket_t fd, item_t *pitem)
{
    /* Unregister the event before closing its descriptor */
    event_del(pitem->ev_client);
    event_free(pitem->ev_client);

    evutil_closesocket(fd);

    evbuffer_free(pitem->evbuf_req);
    evbuffer_free(pitem->evbuf_rsp);

    free(pitem);
}

4. Testing and analysis

For testing, iperf acts as the client and generates traffic:
iperf -c 127.0.0.1 -i 1 -n 11M -P 10
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size:  648 KByte (default)
------------------------------------------------------------
[  9] local 127.0.0.1 port 44384 connected with 127.0.0.1 port 5001
[  3] local 127.0.0.1 port 44377 connected with 127.0.0.1 port 5001
[  4] local 127.0.0.1 port 44378 connected with 127.0.0.1 port 5001
[  5] local 127.0.0.1 port 44379 connected with 127.0.0.1 port 5001
[  6] local 127.0.0.1 port 44380 connected with 127.0.0.1 port 5001
[  8] local 127.0.0.1 port 44382 connected with 127.0.0.1 port 5001
[  7] local 127.0.0.1 port 44381 connected with 127.0.0.1 port 5001
[ 11] local 127.0.0.1 port 44385 connected with 127.0.0.1 port 5001
[ 12] local 127.0.0.1 port 44386 connected with 127.0.0.1 port 5001
[ 10] local 127.0.0.1 port 44383 connected with 127.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  9]  0.0- 0.3 sec  11.0 MBytes   325 Mbits/sec
[  3]  0.0- 0.3 sec  11.0 MBytes   328 Mbits/sec
[  4]  0.0- 0.3 sec  11.0 MBytes   265 Mbits/sec
[  6]  0.0- 0.3 sec  11.0 MBytes   266 Mbits/sec
[  7]  0.0- 0.3 sec  11.0 MBytes   265 Mbits/sec
[ 10]  0.0- 0.4 sec  11.0 MBytes   263 Mbits/sec
[  5]  0.0- 0.4 sec  11.0 MBytes   263 Mbits/sec
[  8]  0.0- 0.4 sec  11.0 MBytes   263 Mbits/sec
[ 11]  0.0- 0.4 sec  11.0 MBytes   259 Mbits/sec
[ 12]  0.0- 0.4 sec  11.0 MBytes   263 Mbits/sec
[SUM]  0.0- 0.4 sec   110 MBytes  2.59 Gbits/sec

./tcp_server -s 127.0.0.1 -p 5001
[L] Setup new connection 127.0.0.1:44377, fd: 8
[L] Setup new connection 127.0.0.1:44378, fd: 9
[L] Setup new connection 127.0.0.1:44379, fd: 10
[L] Setup new connection 127.0.0.1:44380, fd: 11
[L] Setup new connection 127.0.0.1:44381, fd: 12
[L] Setup new connection 127.0.0.1:44382, fd: 13
[L] Setup new connection 127.0.0.1:44383, fd: 14
[L] Setup new connection 127.0.0.1:44384, fd: 15
[L] Setup new connection 127.0.0.1:44385, fd: 16
[L] Setup new connection 127.0.0.1:44386, fd: 17
[15] Reset, read cnt: 10488600
[ 8] Reset, read cnt: 10485784
[ 9] Reset, read cnt: 10485784
[10] Reset, read cnt: 10485784
[11] Reset, read cnt: 10485784
[13] Reset, read cnt: 10485784
[15] Close, read cnt: 11534360
[12] Reset, read cnt: 10485784
[16] Reset, read cnt: 10485784
[17] Reset, read cnt: 10485784
[14] Reset, read cnt: 10485784
[ 8] Close, read cnt: 11534360
[ 9] Close, read cnt: 11534360
[10] Close, read cnt: 11534360
[11] Close, read cnt: 11534360
[13] Close, read cnt: 11534360
[12] Close, read cnt: 11534360
[16] Close, read cnt: 11534360
[17] Close, read cnt: 11534360
[14] Close, read cnt: 11534360

Programming a server with libevent is very convenient; it spares you most of the low-level details of network programming, though you still need a real understanding of how network processing works.

Likewise, evbuffer's internal buffer management is a real convenience for the application layer and can bring a noticeable productivity gain to a project.

References:

[1] http://blog.csdn.net/majianfei1023/article/details/46485705

[2] http://blog.csdn.net/historyasamirror/article/details/5778378

[3] http://blog.csdn.net/yusiguyuan/article/details/20458565
