Attacking GitHub with a memcached DRDoS


Word is that GitHub was hit by a DDoS attack on March 1 — reportedly a severe one.

Let's look at how memcached (mc) gets abused for this.

memcached first takes the UDP port from the command line, then initializes a libevent instance and its threads:

int main (int argc, char **argv) {
...
  settings_init();
...
  while (-1 != (c = getopt(argc, argv,
    ...
    "U:"  /* UDP port number to listen on */
    ...
  ))) {
    switch (c) {
    ...
    case 'U':
        settings.udpport = atoi(optarg);
        udp_specified = true;
        break;
    ...
    }
  }
...
  if (tcp_specified && !udp_specified) {
      settings.udpport = settings.port;
   } else if (udp_specified && !tcp_specified) {
      settings.port = settings.udpport;
   }
...
  main_base = event_init();
...
  thread_init(settings.num_threads, main_base);
...
/* create unix mode sockets after dropping privileges */
    if (settings.socketpath != NULL) {
        errno = 0;
        if (server_socket_unix(settings.socketpath,settings.access)) {
            vperror("failed to listen on UNIX socket: %s", settings.socketpath);
            exit(EX_OSERR);
        }
    }

    /* create the listening socket, bind it, and init */
    if (settings.socketpath == NULL) {
        ...
        // TCP
        errno = 0;
        if (settings.port && server_sockets(settings.port, tcp_transport,
                                           portnumber_file)) {
            vperror("failed to listen on TCP port %d", settings.port);
            exit(EX_OSERR);
        }

        /*
         * initialization order: first create the listening sockets
         * (may need root on low ports), then drop root if needed,
         * then daemonise if needed, then init libevent (in some cases
         * descriptors created by libevent wouldn't survive forking).
         */

        /* create the UDP listening socket and bind it */
        errno = 0;
        if (settings.udpport && server_sockets(settings.udpport, udp_transport,
                                              portnumber_file)) {
            vperror("failed to listen on UDP port %d", settings.udpport);
            exit(EX_OSERR);
        }
        ...
        /* enter the event loop */
    if (event_base_loop(main_base, 0) != 0) {
        retval = EXIT_FAILURE;
    }
...
    }

Before this, settings_init() filled in the defaults; you can see the default port is 11211 and there are 4 worker threads.

static void settings_init(void) {
...
    settings.port = 11211;
    settings.udpport = 11211;
    /* By default this string should be NULL for getaddrinfo() */
    settings.inter = NULL;
    settings.maxbytes = 64 * 1024 * 1024; /* default is 64MB */
    ...
    settings.chunk_size = 48;         /* space for a modest key and value */
    settings.num_threads = 4;         /* N workers */
    ...
}

You can verify this against a running instance:

$ echo "stats settings" | nc localhost 11211
STAT maxbytes 67108864
STAT maxconns 1024
STAT tcpport 11211
STAT udpport 11211
STAT inter NULL
...
STAT chunk_size 48
STAT num_threads 4
...
END

Next comes thread initialization. main_base is the event base of the main (dispatcher) thread; for each worker a pipe is created for libevent notifications. thread_init() mainly calls setup_thread() to set up each thread's bookkeeping structures, and finally creates the threads, each running worker_libevent:

void thread_init(int nthreads, struct event_base *main_base) {
...
    threads = calloc(nthreads, sizeof(LIBEVENT_THREAD));
    if (! threads) {
        perror("Can't allocate thread descriptors");
        exit(1);
    }

    dispatcher_thread.base = main_base;
    dispatcher_thread.thread_id = pthread_self();

    for (i = 0; i < nthreads; i++) {
        int fds[2];
        if (pipe(fds)) {
            perror("Can't create notify pipe");
            exit(1);
        }

        threads[i].notify_receive_fd = fds[0];
        threads[i].notify_send_fd = fds[1];

        setup_thread(&threads[i]);
        /* Reserve three fds for the libevent base, and two for the pipe */
        stats.reserved_fds += 5;
    }

    /* Create threads after we've done all the libevent setup. */
    for (i = 0; i < nthreads; i++) {
        create_worker(worker_libevent, &threads[i]);
    }
...
}

Here we meet the thread_libevent_process function pointer: while each thread's data is being initialized, it is registered as the libevent read callback on the me->notify_receive_fd end of the pipe.

static void setup_thread(LIBEVENT_THREAD *me) {
    me->base = event_init();
    if (! me->base) {
        fprintf(stderr, "Can't allocate event base\n");
        exit(1);
    }

    /* Listen for notifications from other threads */
    event_set(&me->notify_event, me->notify_receive_fd,
              EV_READ | EV_PERSIST, thread_libevent_process, me);
    event_base_set(me->base, &me->notify_event);

    if (event_add(&me->notify_event, 0) == -1) {
        fprintf(stderr, "Can't monitor libevent notify pipe\n");
        exit(1);
    }

    me->new_conn_queue = malloc(sizeof(struct conn_queue));
    if (me->new_conn_queue == NULL) {
        perror("Failed to allocate memory for connection queue");
        exit(EXIT_FAILURE);
    }
    cq_init(me->new_conn_queue);
...
}

When the pipe becomes readable, this callback fires. It pops one item off the queue and then calls conn_new():

static void thread_libevent_process(int fd, short which, void *arg) {
    LIBEVENT_THREAD *me = arg;
    CQ_ITEM *item;
    char buf[1];

    if (read(fd, buf, 1) != 1)
        if (settings.verbose > 0)
            fprintf(stderr, "Can't read from libevent pipe\n");

    switch (buf[0]) {
    case 'c':
        item = cq_pop(me->new_conn_queue);

        if (NULL != item) {
            conn *c = conn_new(item->sfd, item->init_state, item->event_flags,
                               item->read_buffer_size, item->transport, me->base);
...
        }
    }
}

conn_new() builds a connection structure for the new request. It mostly just fills in the conn struct; the important part is registering the event_handler function pointer with libevent:

conn *conn_new(const int sfd, enum conn_states init_state,
                const int event_flags,
                const int read_buffer_size, enum network_transport transport,
                struct event_base *base) {
    conn *c = conn_from_freelist();

    if (NULL == c) {
        if (!(c = (conn *)calloc(1, sizeof(conn)))) {
            fprintf(stderr, "calloc()\n");
            return NULL;
        }

        MEMCACHED_CONN_CREATE(c);

        c->rbuf = c->wbuf = 0;
        c->rbuf = (char *)malloc((size_t)c->rsize);
        c->wbuf = (char *)malloc((size_t)c->wsize);
...
        c->msglist = (struct msghdr *)malloc(sizeof(struct msghdr) * c->msgsize);
...
    }// if
...
    c->sfd = sfd;
...
    c->item = 0;
...
    event_set(&c->event, sfd, event_flags, event_handler, (void *)c);

    event_base_set(base, &c->event);

    c->ev_flags = event_flags;

    if (event_add(&c->event, 0) == -1) {
...
    }
...
    return c;
}

This callback fires whenever there is activity on the connection:

void event_handler(const int fd, const short which, void *arg) {
    conn *c;

    c = (conn *)arg;
    assert(c != NULL);

    c->which = which;

    /* sanity */
...
    drive_machine(c);
    return;
}

After a client connects, the memcached server's main thread wakes up and calls event_handler() -> drive_machine(), entering this state machine. From the rest of the code you can see that only TCP and UNIX-domain sockets ever go through conn_listening, i.e. the accept() path. conn_waiting waits for the next command request; conn_read reads the data, and once the request is in, the conn transitions state and the command gets parsed and executed. Replies are sent in the conn_mwrite state; transmit() ultimately calls sendmsg() on the socket.

static void drive_machine(conn *c) {
    bool stop = false;
    int sfd, flags = 1;
    socklen_t addrlen;
    struct sockaddr_storage addr;
    int nreqs = settings.reqs_per_event;
    int res;
    const char *str;

    assert(c != NULL);

    while (!stop) {
        switch(c->state) {
        case conn_listening:
            addrlen = sizeof(addr);

            if ((sfd = accept(c->sfd, (struct sockaddr *)&addr, &addrlen)) == -1) {
...
            }
...
        case conn_waiting:
            if (!update_event(c, EV_READ | EV_PERSIST)) {
                if (settings.verbose > 0)
                    fprintf(stderr, "Couldn't update event\n");
                conn_set_state(c, conn_closing);
                break;
            }

            conn_set_state(c, conn_read);
            stop = true;
            break;

        case conn_read:
            res = IS_UDP(c->transport) ? try_read_udp(c) : try_read_network(c);
            switch (res) {
            case READ_NO_DATA_RECEIVED:
                conn_set_state(c, conn_waiting);
                break;
                ...
            }
            break;

        case conn_parse_cmd :
            if (try_read_command(c) == 0) {
                /* we need more data! */
                conn_set_state(c, conn_waiting);
            }

            break;
        ...
        case conn_nread:
            if (c->rlbytes == 0) {
                complete_nread(c);
                break;
            }

            /* first check if we have leftovers in the conn_read buffer */
            if (c->rbytes > 0) {
                int tocopy = c->rbytes > c->rlbytes ? c->rlbytes : c->rbytes;
                if (c->ritem != c->rcurr) {
                    memmove(c->ritem, c->rcurr, tocopy);
                }
                ...
            }

            /*  now try reading from the socket */
            res = read(c->sfd, c->ritem, c->rlbytes);
...
        case conn_write:
            ...
            /* fall through... */

        case conn_mwrite:

          if (IS_UDP(c->transport) && c->msgcurr == 0 && build_udp_headers(c) != 0) {
            if (settings.verbose > 0)
              fprintf(stderr, "Failed to build UDP headers\n");
            conn_set_state(c, conn_closing);
            break;
          }
            switch (transmit(c)) {
            case TRANSMIT_COMPLETE:
                if (c->state == conn_mwrite) {
                    ...
                    /* XXX:  I don't know why this wasn't the general case */
                    if(c->protocol == binary_prot) {
                        conn_set_state(c, c->write_and_go);
                    } else {
                        ...
                    }
                }
                break;

            case TRANSMIT_INCOMPLETE:
            case TRANSMIT_HARD_ERROR:
                break;                   /* Continue in state machine. */

            case TRANSMIT_SOFT_ERROR:
                stop = true;
                break;
            }
            break;
        ...
        case conn_closing:
            if (IS_UDP(c->transport))
                conn_cleanup(c);
            else
                conn_close(c);
            stop = true;
            break;
          ...
    }
    return;
}

try_read_udp(), used above for the UDP path, simply calls recvfrom() to receive data from the client and places the command into rbuf:

static enum try_read_result try_read_udp(conn *c) {
    int res;

    assert(c != NULL);

    c->request_addr_size = sizeof(c->request_addr);

    res = recvfrom(c->sfd, c->rbuf, c->rsize,
                   0, &c->request_addr, &c->request_addr_size);
...
        res -= 8;
        memmove(c->rbuf, c->rbuf + 8, res);

        c->rbytes = res;
        c->rcurr = c->rbuf;
        return READ_DATA_RECEIVED;
    }
    return READ_NO_DATA_RECEIVED;
}

The modes configured in main() let clients reach the server in several ways. For UDP, once the socket is bound, sfd can be read directly — which is why a UDP conn's initial state is conn_read, while a TCP conn's initial state is conn_listening:

static int server_sockets(int port, enum network_transport transport,
                          FILE *portnumber_file) {
    if (settings.inter == NULL) {
        return server_socket(settings.inter, port, transport, portnumber_file);
    } else {
        // tokenize them and bind to each one of them..
        char *b;
        int ret = 0;

        char *list = strdup(settings.inter);

        if (list == NULL) {
            fprintf(stderr, "Failed to allocate memory for parsing server interface string\n");
            return 1;
        }

        for (char *p = strtok_r(list, ";,", &b);
            ...
            ret |= server_socket(p, the_port, transport, portnumber_file);
        }
        free(list);
        return ret;
    }
}

Each interface gets its own bind:

static int server_socket(const char *interface,
                         int port,
                         enum network_transport transport,
                         FILE *portnumber_file) {
...
    hints.ai_socktype = IS_UDP(transport) ? SOCK_DGRAM : SOCK_STREAM;

    if (port == -1) {
        port = 0;
    }
    snprintf(port_buf, sizeof(port_buf), "%d", port);

    error= getaddrinfo(interface, port_buf, &hints, &ai);
    ...
    for (next= ai; next; next= next->ai_next) {
        conn *listen_conn_add;

        if ((sfd = new_socket(next)) == -1) {
            ...
            continue;
        }

#ifdef IPV6_V6ONLY
      ...
#endif

        setsockopt(sfd, SOL_SOCKET, SO_REUSEADDR, (void *)&flags, sizeof(flags));

        if (IS_UDP(transport)) {
            maximize_sndbuf(sfd);
        } else {
            ...
        }

        if (bind(sfd, next->ai_addr, next->ai_addrlen) == -1) {
            ...
        } else {
            success++;
            if (!IS_UDP(transport) && listen(sfd, settings.backlog) == -1) {
              ...
            }
        }

        if (IS_UDP(transport)) {
            // UDP
            int c;

            for (c = 0; c < settings.num_threads_per_udp; c++) {
                /* this is guaranteed to hit all threads because we round-robin */
                dispatch_conn_new(sfd, conn_read, EV_READ | EV_PERSIST,
                                  UDP_READ_BUFFER_SIZE, transport);
            }
        } else {
            if (!(listen_conn_add = conn_new(sfd, conn_listening,
                                             EV_READ | EV_PERSIST, 1,
                                             transport, main_base))) {
               ...
            }

            listen_conn_add->next = listen_conn;
            listen_conn = listen_conn_add;
        }
    }

    freeaddrinfo(ai);

    /* Return zero iff we detected no errors in starting up connections */
    return success == 0;
}

maximize_sndbuf() grows the socket's send buffer as large as the system allows: it reads the current size, then binary-searches between that and MAX_SENDBUF_SIZE, keeping the largest value setsockopt() accepts.

/*
 * Sets a socket's send buffer size to the maximum allowed by the system.
 */
// defined somewhere else
#define MAX_SENDBUF_SIZE (256 * 1024 * 1024)

static void maximize_sndbuf(const int sfd) {
    ...
    if (getsockopt(sfd, SOL_SOCKET, SO_SNDBUF, &old_size, &intsize) != 0) {
      ...
    }

    min = old_size;
    max = MAX_SENDBUF_SIZE;

    while (min <= max) {
        avg = ((unsigned int)(min + max)) / 2;
        if (setsockopt(sfd, SOL_SOCKET, SO_SNDBUF, (void *)&avg, intsize) == 0) {
            last_good = avg;
            min = avg + 1;
        } else {
            max = avg - 1;
        }
    }
    ...
}

A new connection is then dispatched to one of the pool's threads: an item is pushed onto that thread's work queue, and one character is written down the pipe to the (possibly sleeping) worker. The registered read event fires and thread_libevent_process() runs (the pipe fd was wired into the event back in setup_thread()):

void dispatch_conn_new(int sfd, enum conn_states init_state, int event_flags,
                       int read_buffer_size, enum network_transport transport) {
    // CQ_ITEM connection queue item
    CQ_ITEM *item = cqi_new();
    char buf[1];

    int tid = (last_thread + 1) % settings.num_threads;

    LIBEVENT_THREAD *thread = threads + tid;
    ...

    cq_push(thread->new_conn_queue, item);

    MEMCACHED_CONN_DISPATCH(sfd, thread->thread_id);

    buf[0] = 'c';
    if (write(thread->notify_send_fd, buf, 1) != 1) {
        ...
    }

}

So in UDP mode memcached will return amplified data to whatever address the request claims to come from — and open memcached servers across the Internet can be used to amplify an attack.

A look at the transport, RFC 768:

                         User Datagram Protocol
                         ----------------------
...
protocol  is transaction oriented, and delivery and duplicate protection
are not guaranteed.  Applications requiring ordered reliable delivery of
streams of data should use the Transmission Control Protocol (TCP) [2].
Format
------


                  0      7 8     15 16    23 24    31
                 +--------+--------+--------+--------+
                 |     Source      |   Destination   |
                 |      Port       |      Port       |
                 +--------+--------+--------+--------+
                 |                 |                 |
                 |     Length      |    Checksum     |
                 +--------+--------+--------+--------+
                 |
                 |          data octets ...
                 +---------------- ...

                      User Datagram Header Format

Fields
------

The Length field is 2 bytes, so a single UDP datagram is capped at 2^16 - 1 = 65535 bytes (roughly 64 KB), and that length includes the 8-byte UDP header. UDP is connectionless: a datagram can be sent straight at a target with no three-way handshake, and the receiver has no good way to verify the sender's source IP.

The attack, then: batch-set plenty of large values (with long expiry times) on open memcached servers, then send UDP get requests for those values with the source address forged to the target, concentrated in a short time window. The servers reflect the data onto the target — a textbook DRDoS.

In late February, dormando released 1.5.6, which disables the UDP listener by default:
https://groups.google.com/forum/#!topic/memcached/pu6LAIbL_Ks

To defend: upgrade to the new version, restrict traffic at the network layer, or start memcached with -U 0 — settings.udpport becomes 0, the `settings.udpport && server_sockets(...)` expression short-circuits, and no UDP listener is ever created.


Linkerist
March 4, 2018, at a street-corner café in Beijing
