Recently, for security reasons, we put a proxy in front of our RDS servers that upgrades plain MySQL TCP connections to SSL. During testing, 皓庭 noticed that once Tsung had opened a few thousand TCP connections, the Erlang-based SSL proxy kept reporting {error, enfile} from gen_tcp:accept. I investigated as follows.

First, consult man accept to pin down what ENFILE means, since gen_tcp ultimately calls the accept system call:

    EMFILE The per-process limit of open file descriptors has been reached.
    ENFILE The system limit on the total number of open files has been reached.


$ uname -r
$ cat /proc/sys/fs/file-nr
2040    0       2417338
$ ulimit -n

We had already tuned the system's file-handle limits (see my earlier post 老生常谈: ulimit问题及其影响 for details), and these numbers look perfectly normal: the three fields of /proc/sys/fs/file-nr are the number of allocated file handles, the number of allocated-but-unused handles, and the system-wide maximum, so we are nowhere near the global limit. Still, let's see where the kernel's accept path (net/socket.c) can return ENFILE:

static int sock_alloc_fd(struct file **filep)
{
        int fd;

        fd = get_unused_fd();
        if (likely(fd >= 0)) {
                struct file *file = get_empty_filp();

                *filep = file;
                if (unlikely(!file)) {
                        put_unused_fd(fd);
                        return -ENFILE;
                }
        } else
                *filep = NULL;
        return fd;
}
static int __sock_create(int family, int type, int protocol,
                         struct socket **res, int kern)
{
        ...
        /*
         *      Allocate the socket and allow the family to set things up. if
         *      the protocol is 0, the family is instructed to select an appropriate
         *      default.
         */
        if (!(sock = sock_alloc())) {
                if (net_ratelimit())
                        printk(KERN_WARNING "socket: no more sockets\n");
                err = -ENFILE;          /* Not exactly a match, but its the
                                           closest posix thing */
                goto out;
        }
        ...
asmlinkage long sys_accept(int fd, struct sockaddr __user *upeer_sockaddr,
                           int __user *upeer_addrlen)
{
        struct socket *sock, *newsock;
        struct file *newfile;
        int err, len, newfd, fput_needed;
        char address[MAX_SOCK_ADDR];

        sock = sockfd_lookup_light(fd, &err, &fput_needed);
        if (!sock)
                goto out;

        err = -ENFILE;
        if (!(newsock = sock_alloc()))
                goto out_put;
        ...


To rule the kernel out, here is a SystemTap script that prints a backtrace on each of those failure paths (ENFILE is errno 23 on Linux, hence the -23):

$ cat enfile.stp
probe kernel.function("kmem_cache_alloc").return {
  if ($return == 0) { print_backtrace(); exit(); }
}
probe kernel.function("sock_alloc_fd").return {
  if ($return < 0) { print_backtrace(); exit(); }
}
probe syscall.accept.return {
  if ($return == -23) { print_backtrace(); exit(); }
}
probe begin {
  println("enfile.stp ready")
}
$ sudo stap enfile.stp

When gen_tcp:accept reported {error, enfile}, stap never fired, so we can basically rule out the operating system and turn to the gen_tcp implementation itself.
gen_tcp is implemented as a port; the code lives in erts/emulator/drivers/common/inet_drv.c. Here is the spot that produces ENFILE:

/* Copy a descriptor, by creating a new port with same settings
 * as the descriptor desc.
 * return NULL on error (ENFILE no ports avail)
 */
static tcp_descriptor* tcp_inet_copy(tcp_descriptor* desc, SOCKET s,
                                     ErlDrvTermData owner, int* err)
{
    ...
    /* The new port will be linked and connected to the original caller */
    port = driver_create_port(port, owner, "tcp_inet", (ErlDrvData) copy_desc);
    if ((long)port == -1) {
        *err = ENFILE;
        return NULL;
    }
    ...

When driver_create_port fails, gen_tcp returns ENFILE, so this looks like the right place. Let's continue into driver_create_port:

/*
 * Driver function to create new instances of a driver
 * Historical reason: to be used with inet_drv for creating
 * accept sockets inorder to avoid a global table.
 */
driver_create_port(ErlDrvPort creator_port_ix, /* Creating port */
                   ErlDrvTermData pid,    /* Owner/Caller */
                   char* name,            /* Driver name */
                   ErlDrvData drv_data)   /* Driver data */
{
    ...
    rp = erts_pid2proc(NULL, 0, pid, ERTS_PROC_LOCK_LINK);
    if (!rp) {
        return (ErlDrvTermData) -1;   /* pid does not exist */
    }
    if ((port_num = get_free_port()) < 0) {
        errno = ENFILE;
        erts_smp_proc_unlock(rp, ERTS_PROC_LOCK_LINK);
        return (ErlDrvTermData) -1;
    }
    port_id = make_internal_port(port_num);
    port = &erts_port[port_num & erts_port_tab_index_mask];
    ...


So get_free_port() failing means the port table is full. Its size is fixed once at startup, in init_io():

/* initialize the port array */
void init_io(void)
{
    ...
    if (erts_sys_getenv("ERL_MAX_PORTS", maxports, &maxportssize) == 0)
        erts_max_ports = atoi(maxports);
    else
        erts_max_ports = sys_max_files();

    if (erts_max_ports > ERTS_MAX_PORTS)
        erts_max_ports = ERTS_MAX_PORTS;
    if (erts_max_ports < 1024)
        erts_max_ports = 1024;

    if (erts_use_r9_pids_ports) {
        ports_bits = ERTS_R9_PORTS_BITS;
        if (erts_max_ports > ERTS_MAX_R9_PORTS)
            erts_max_ports = ERTS_MAX_R9_PORTS;
    }
    ...
    port_extra_shift = erts_fit_in_bits(erts_max_ports - 1);
    port_num_mask = (1 << ports_bits) - 1;
    ...

In short: if the ERL_MAX_PORTS environment variable is set, the port table is sized from it; otherwise it defaults to the same value as ulimit -n. Either way the result is clamped to at most ERTS_MAX_PORTS and at least 1024.

Now we basically understand the problem: erts_max_ports is set too small.

Attach gdb to our beam process to confirm:
(gdb) p erts_max_ports
$1 = 4096

The fix: start the emulator with erl -env ERL_MAX_PORTS NNNN and make NNNN comfortably large.


For reference, the ejabberd tuning guide covers these same Erlang knobs:

    This page lists several tricks to tune your ejabberd and Erlang installation for maximum performance gains. Note that some of the described options are experimental.

    Erlang Ports Limit: ERL_MAX_PORTS
    Erlang consumes one port for every connection, either from a client or from another Jabber server. The option ERL_MAX_PORTS limits the number of concurrent connections and can be specified when starting ejabberd:

    erl -s ejabberd -env ERL_MAX_PORTS 5000 …

    Maximum Number of Erlang Processes: +P
    Erlang uses a lot of lightweight processes. If there is so much activity on ejabberd that the maximum number of processes is reached, people will experience greater latency. As these processes are implemented in Erlang, and are therefore unrelated to operating system processes, you do not have to worry about allowing a huge number of them.

    erl -s ejabberd +P 250000 …

    ERL_FULLSWEEP_AFTER: Maximum number of collections before a forced fullsweep
    The ERL_FULLSWEEP_AFTER option shrinks the size of the Erlang process after RAM intensive events. Note that this option may downgrade performance. Hence this option is only interesting on machines that host other services (webserver, mail) on which ejabberd does not receive constant load.

    erl -s ejabberd -env ERL_FULLSWEEP_AFTER 0 …

    Kernel Polling: +K true

    The kernel polling option requires support in your kernel. By default, Erlang currently supports kernel polling under FreeBSD, Mac OS X, and Solaris. If you use Linux, check this newspost. Additionally, you need to enable this feature while compiling Erlang.

    From Erlang documentation -> Basic Applications -> erts -> erl -> System Flags:

    +K true|false

    Enables or disables the kernel poll functionality if the emulator has kernel poll support. By default the kernel poll functionality is disabled. If the emulator doesn't have kernel poll support and the +K flag is passed to the emulator, a warning is issued at startup.

    If you meet all requirements, you can enable it in this way:

    erl -s ejabberd +K true …

    Mnesia Tables to Disk
    By default, ejabberd uses Mnesia as its database. In Mnesia you can configure each table in the database to be stored on RAM, on RAM and on disk, or only on disk. You can configure this in the web interface: Nodes -> 'mynode' -> DB Management. Modification of this option will consume some memory and CPU time.
    Number of Concurrent ETS and Mnesia Tables: ERL_MAX_ETS_TABLES
    The number of concurrent ETS and Mnesia tables is limited. When the limit is reached, errors will appear in the logs:

    ** Too many db tables **

    You can safely increase this limit when starting ejabberd. It impacts memory consumption but the difference will be quite small.

    erl -s ejabberd -env ERL_MAX_ETS_TABLES 20000 …