Segfault with ulimit

On Linux, I have a program that crashes only when ulimit -s is set to unlimited. The segfault happens in a Libmicrohttpd connection callback, so the backtrace is fairly deep (about 10 functions). It crashes in whatever function I call first inside the callback, even if it's just printf. Here is the stack trace from the coredump:

#0  0x000000341fa44089 in vfprintf () from /lib64/libc.so.6
#1  0x000000341fa4ef58 in fprintf () from /lib64/libc.so.6
#2  0x000000000044488d in answer_to_connection (cls=0x7fffc57b0170, connection=0x2b59bc0008c0, url=0x2b59bc000a84 "/remote.html",     method=0x2b59bc000a80 "GET", version=0x2b59bc000a9f "HTTP/1.0", upload_data=0x0, upload_data_size=0x2b59b94247b8, con_cls=0x2b59bc000918) at network.c:149
#3  0x00000000004f7f9f in call_connection_handler (connection=connection@entry=0x2b59bc0008c0) at ../../../src/microhttpd/connection.c:2284
#4  0x00000000004f92f8 in MHD_connection_handle_idle (connection=connection@entry=0x2b59bc0008c0) at ../../../src/microhttpd/connection.c:3361
#5  0x00000000004fae81 in call_handlers (con=con@entry=0x2b59bc0008c0, read_ready=<optimized out>, write_ready=<optimized out>, force_close=<optimized out>) at ../../../src/microhttpd/daemon.c:1113
#6  0x00000000004fd93b in thread_main_handle_connection (data=0x2b59bc0008c0) at ../../../src/microhttpd/daemon.c:1965
#7  0x0000003420607aa1 in start_thread () from /lib64/libpthread.so.0
#8  0x000000341fae8bcd in clone () from /lib64/libc.so.6

If I set ulimit -s to 8192, everything works fine. I'm used to crashes being caused by a stack that's too small. But why would it work with a smaller stack, and fail with an unlimited one?

Edit:

It is definitely related to threading. A minimal example:

#include <pthread.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

void function(char arg){
  char buffer[666666];

  if(arg > 0){
    memset(buffer, arg, 6666);
    fprintf(stderr, "DONE %p\n", &buffer);
    function(arg - 1);
  }
}

void *thread(void *arg){
  int a;

  fprintf(stderr, "THREAD %p\n", &a);

  function(6);

  return NULL;
}

int main(){
  int i;
  pthread_t p;

  fprintf(stderr, "MAIN %p\n", &i);

  pthread_create(&p, NULL, thread, NULL);

  pthread_join(p, NULL);
}

With ulimit -s 8192:

$ ./test 
MAIN 0x7ffd73f9cc5c
THREAD 0x7fa4d12bfeac
DONE 0x7fa4d121d250
DONE 0x7fa4d117a600
DONE 0x7fa4d10d79b0
DONE 0x7fa4d1034d60
DONE 0x7fa4d0f92110
DONE 0x7fa4d0eef4c0

With ulimit -s unlimited:

$ ./test 
MAIN 0x7ffd1438d4dc
THREAD 0x2ab91aef6eac
DONE 0x2ab91ae54250
DONE 0x2ab91adb1600
DONE 0x2ab91ad0e9b0
Segmentation fault (core dumped)

As Mark Plotnick explained, the problem comes from the default stack size used for threads: ulimit -s unlimited does not give threads an unlimited stack. Instead, glibc falls back to a 2MB thread stack, which is small by today's standards.

Looking closer at the pthread code, in __pthread_initialize_minimal_internal():

 /* Determine the default allowed stack size.  This is the size used
     in case the user does not specify one.  */
  struct rlimit limit;
  if (__getrlimit (RLIMIT_STACK, &limit) != 0
      || limit.rlim_cur == RLIM_INFINITY)
    /* The system limit is not usable.  Use an architecture-specific
       default.  */
    limit.rlim_cur = ARCH_STACK_DEFAULT_SIZE;
  else if (limit.rlim_cur < PTHREAD_STACK_MIN)
    /* The system limit is unusably small.
       Use the minimal size acceptable.  */
    limit.rlim_cur = PTHREAD_STACK_MIN;

  /* Make sure it meets the minimum size that allocate_stack
     (allocatestack.c) will demand, which depends on the page size.  */
  const uintptr_t pagesz = GLRO(dl_pagesize);
  const size_t minstack = pagesz + __static_tls_size + MINIMAL_REST_STACK;
  if (limit.rlim_cur < minstack)
    limit.rlim_cur = minstack;

  /* Round the resource limit up to page size.  */
  limit.rlim_cur = ALIGN_UP (limit.rlim_cur, pagesz);
  lll_lock (__default_pthread_attr_lock, LLL_PRIVATE);
  __default_pthread_attr.stacksize = limit.rlim_cur;
  __default_pthread_attr.guardsize = GLRO (dl_pagesize);
  lll_unlock (__default_pthread_attr_lock, LLL_PRIVATE);

The pthread_create(3) man page only mentions the default stack size for Linux/x86-32, but here are the values of ARCH_STACK_DEFAULT_SIZE for other architectures in glibc 2.3.3 and later:

  • Sparc-32: 2MB
  • Sparc-64: 4MB
  • PowerPC: 4MB
  • S/390: 2MB
  • IA-64: 32MB
  • i386: 2MB
  • x86_64: 2MB
I have submitted a patch to that man page to include this information. Thanks again for the help investigating this issue.
