About catching SIGSEGV in a multithreaded environment

I'd like to know if it is possible, and what the recommended way is, to catch the SIGSEGV signal in a multithreaded environment. I am particularly interested in handling the SIGSEGV raised by something like '*((int *)0) = 0'. Some reading on this topic led me to signal() and sigaction(), which install a signal handler, but neither seemed promising in a multithreaded environment. I then tried sigwaitinfo(), receiving the signal in one thread after a prior pthread_sigmask() call blocked it in the other threads. It works to the extent that the SIGSEGV signal raised using ra…
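
For reference, a minimal sketch of the sigaction() route, assuming Linux/glibc and a fault you intend to recover from: a SIGSEGV generated by a faulting instruction is delivered to the thread that executed it, so a process-wide handler installed with sigaction() already runs on the offending thread, and it can escape with siglongjmp() instead of returning (returning would re-execute the faulting instruction).

    #include <pthread.h>
    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static __thread sigjmp_buf escape;          /* per-thread recovery point */

    static void on_segv(int sig, siginfo_t *si, void *uctx)
    {
        (void)sig; (void)si; (void)uctx;
        /* Only async-signal-safe work belongs here; siglongjmp() abandons the
           faulting code path (formally murky, but a common pattern). */
        siglongjmp(escape, 1);
    }

    static void *worker(void *arg)
    {
        (void)arg;
        if (sigsetjmp(escape, 1) == 0)
            *(volatile int *)0 = 0;             /* the fault under test */
        else
            fprintf(stderr, "thread survived SIGSEGV\n");
        return NULL;
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_sigaction = on_segv;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        return 0;
    }

sigwaitinfo() only helps for signals that can be blocked and delivered asynchronously; a synchronous SIGSEGV from a bad dereference cannot usefully be routed to another thread that way.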

How can we know the minimum stack size needed by a program launched with exec()?

In an attempt to avoid stack clash attacks against a program, we tried to set a limit on the stack size with setrlimit(RLIMIT_STACK) to about 2 MB. This limit is fine for our program's own internal needs, but we then noticed that attempts to exec() external programs began to fail on some systems with this new limit. One system we investigated using the test program below seems to require a minimum stack size of more than 4 MiB for exec()'d programs. My question is: how can we know the safe minimum value for the stack size on a given system so that exec() will not fail? I…
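
One way to get an empirical answer, sketched below under the assumption that a trivial helper such as /bin/true is available (that path is illustrative, not part of the question): fork, lower RLIMIT_STACK in the child, exec the helper, and record the smallest limit at which it still runs.

    #include <sys/resource.h>
    #include <sys/wait.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Returns 1 if "/bin/true" runs to completion under the given stack limit. */
    static int runs_with_stack(rlim_t bytes)
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;
        if (pid == 0) {
            struct rlimit rl = { bytes, bytes };
            setrlimit(RLIMIT_STACK, &rl);
            execl("/bin/true", "true", (char *)NULL);
            _exit(127);                         /* exec itself failed */
        }
        int status = 0;
        waitpid(pid, &status, 0);
        return WIFEXITED(status) && WEXITSTATUS(status) == 0;
    }

    int main(void)
    {
        for (rlim_t kib = 64; kib <= 8192; kib *= 2)
            printf("%6llu KiB: %s\n", (unsigned long long)kib,
                   runs_with_stack(kib * 1024) == 1 ? "ok" : "failed");
        return 0;
    }

The result is system- and binary-specific (the dynamic loader, environment, and argument strings all count against the limit), so any "safe minimum" found this way only holds for the programs actually tested.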

set stack size for threads using setrlimit

I'm using a library which creates a pthread using the default stack size of 8 MB. Is it possible to programmatically reduce the stack size of the thread the library creates? I tried using setrlimit(RLIMIT_STACK, ...) inside my main() function, but that doesn't seem to have any effect. ulimit -s seems to do the job, but I don't want to set the stack size before my program is executed. Any idea what I can do? Thanks. Update 1: It seems I will give up on setting the stack size with setrlimit(RLIMIT_STACK, ...). I checked the resident…
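
If the thread creation were under your control (here it is not, since the library calls pthread_create itself), the stack size could be set per thread with an attribute; a minimal sketch, with the 512 KiB figure chosen arbitrarily:

    #define _GNU_SOURCE
    #include <limits.h>
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) { (void)arg; return NULL; }

    int main(void)
    {
        pthread_attr_t attr;
        pthread_attr_init(&attr);

        size_t sz = 512 * 1024;                 /* 512 KiB, an arbitrary example */
        if (sz < PTHREAD_STACK_MIN)
            sz = PTHREAD_STACK_MIN;
        pthread_attr_setstacksize(&attr, sz);

        pthread_t t;
        if (pthread_create(&t, &attr, worker, NULL) != 0) {
            perror("pthread_create");
            return 1;
        }
        pthread_join(t, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }

With glibc, the default stack size for new threads is derived from RLIMIT_STACK, but it is sampled when the threading library initializes, which is one reason a setrlimit() call made later in main() may appear to have no effect.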

How to detect the top of the stack in program virtual space

I am trying to estimate the span of my program's stack range. My strategy was to assume that, since the stack grows downwards, I can create a local variable in the current stack frame and then use its address as a reference. int main() { //Now we are in the main frame. //Define a local variable which would be lying at the top of the stack char a; //Now define another variable int b; //address should be lower assuming stack grows downwards //Now estimate the stack siz…
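
Comparing the addresses of locals only brackets the part of the stack that is currently in use. On Linux with glibc (an assumption, and a nonportable extension), pthread_getattr_np() reports the base and size of the reserved stack region directly; a sketch:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        pthread_attr_t attr;
        void *stack_base;
        size_t stack_size;

        pthread_getattr_np(pthread_self(), &attr);   /* current (main) thread */
        pthread_attr_getstack(&attr, &stack_base, &stack_size);
        pthread_attr_destroy(&attr);

        char local;                                  /* lives somewhere in that range */
        printf("stack [%p, %p), size %zu KiB, &local = %p\n",
               stack_base, (void *)((char *)stack_base + stack_size),
               stack_size / 1024, (void *)&local);
        return 0;
    }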

Segmentation fault: Stack allocation in a C program in Ubuntu when buffer > 4M

Here's a small program for a college task: #include <unistd.h> #ifndef BUFFERSIZE #define BUFFERSIZE 1 #endif main() { char buffer[BUFFERSIZE]; int i; int j = BUFFERSIZE; i = read(0, buffer, BUFFERSIZE); while (i > 0) { write(1, buffer, i); i = read(0, buffer, BUFFERSIZE); } return 0; } There is an alternative using the stdio.h fread and fwrite functions. OK. I compiled this…
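
A quick way to relate such a crash to the stack limit, as a sketch (assuming getrlimit() and the usual multi-MiB default soft limit): print the soft RLIMIT_STACK and compare it with BUFFERSIZE; an automatic buffer close to or above that limit faults as soon as it is touched, while a malloc()'d or static buffer of the same size does not.

    #include <sys/resource.h>
    #include <stdio.h>

    int main(void)
    {
        struct rlimit rl;
        getrlimit(RLIMIT_STACK, &rl);
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("stack soft limit: unlimited\n");
        else
            printf("stack soft limit: %lld KiB\n", (long long)rl.rlim_cur / 1024);
        /* An automatic char buffer[8 * 1024 * 1024] would exceed a typical 8 MiB
           limit once the rest of main()'s frame is counted; moving the buffer to
           the heap (malloc) or making it static sidesteps the stack limit. */
        return 0;
    }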

Will the stack of a C program ever shrink?

I've noticed that every running C program has a private mapping called [stack] that is initially quite small (128k on my machine), but will grow to accommodate any automatic variables (up to the stack size limit). I assume this is where the call stack of my program is located. However, it doesn't seem to ever shrink back to its original size. Is there any way to free up that memory without terminating the process? How is the C stack implemented internally; is the [stack] mapping grown on demand? Is that done by compiler-generated code, the C library, or the operating system, and where is the growth triggered? Update: on x86-64 I'm using Li…
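
A small experiment, assuming Linux's /proc is available, that measures the [stack] mapping before and after deliberately deepening the stack; on typical systems the second figure is larger and stays larger, matching the observation that the mapping grows on demand but is not given back.

    #include <stdio.h>
    #include <string.h>

    /* Size of the [stack] mapping as reported by /proc/self/maps, in KiB. */
    static unsigned long stack_mapping_kib(void)
    {
        FILE *f = fopen("/proc/self/maps", "r");
        char line[256];
        unsigned long lo = 0, hi = 0;
        while (f && fgets(line, sizeof line, f))
            if (strstr(line, "[stack]"))
                sscanf(line, "%lx-%lx", &lo, &hi);
        if (f)
            fclose(f);
        return (hi - lo) / 1024;
    }

    /* Recurse with a page-sized local so each frame touches new stack pages. */
    static int burn(int depth)
    {
        volatile char pad[4096];
        pad[0] = (char)depth;
        if (depth > 0)
            pad[0] = (char)(pad[0] + burn(depth - 1));   /* non-tail call keeps the frame */
        return pad[0];
    }

    int main(void)
    {
        printf("[stack] before: %lu KiB\n", stack_mapping_kib());
        burn(512);                                       /* roughly 2 MiB of extra depth */
        printf("[stack] after:  %lu KiB\n", stack_mapping_kib());
        return 0;
    }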

Segfault occurs on initialization in pthread only

I cannot understand why the following pseudo code is causing a segfault. Using pthreads to run a function, I run into a SEGFAULT initializing an integer to zero. When my_threaded_function is not run in a threaded context, or if I call the function from the main thread, there is no issue. The SEGFAULT on initializing rc = 0; occurs only inside the maze_init function. I have confirmed that I am not out of stack space, but I cannot think of what would cause the function to behave differently inside a pthread (there is no shared memory involved); according to gdb the address &aa cannot be accessed. Why would a stack variable's…
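
Without the real code this is only a guess, but a classic way to reproduce the symptom is sketched below (the cells array and its size are invented for illustration; maze_init is just the function name from the question): a large automatic object that fits on the main thread's stack overflows a thread's smaller stack, so the same function crashes only when run in the thread.

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    #define MAZE_CELLS (4 * 1024 * 1024)        /* ~4 MiB automatic array (illustrative) */

    static void *maze_init(void *arg)
    {
        (void)arg;
        char cells[MAZE_CELLS];                 /* fine on an 8 MiB main stack,
                                                   fatal on a 1 MiB thread stack */
        int rc = 0;
        memset(cells, rc, sizeof cells);        /* first touch of the overflow faults */
        printf("rc=%d cells[0]=%d\n", rc, cells[0]);
        return NULL;
    }

    int main(void)
    {
        maze_init(NULL);                        /* works when called directly */

        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setstacksize(&attr, 1024 * 1024);
        pthread_t t;
        pthread_create(&t, &attr, maze_init, NULL);   /* likely SIGSEGV in the thread */
        pthread_join(t, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }

In that scenario, either a larger pthread_attr_setstacksize() or moving the large object to the heap removes the fault.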

How does memory management happen for process threads in one virtual address space?

I know that threads share code/global data but have different stacks; each thread has its own stack. I believe there is one virtual address space for each process, which means each thread uses this single virtual address space. I want to know how the stack/heap grows when there are multiple threads in the virtual address space. How does the OS manage it if the stack space is full for one thread? In Linux, a stack overflow occurs if the stack size is exceeded; it is bounded by the guard size, and the programmer is responsible for taking care of stack overflow. The default guard size is equal to the page size defined on the system. In fact, …
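
A small illustration, assuming Linux/glibc (pthread_getattr_np() is a GNU extension): each thread gets its own fixed-size stack mapping carved out of the single process address space, while the heap is shared by all of them; printing the ranges makes the layout visible.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void *report(void *name)
    {
        pthread_attr_t a;
        void *base;
        size_t size;
        pthread_getattr_np(pthread_self(), &a);
        pthread_attr_getstack(&a, &base, &size);
        pthread_attr_destroy(&a);
        printf("%-5s stack [%p, %p) size %zu KiB\n",
               (char *)name, base, (void *)((char *)base + size), size / 1024);
        return NULL;
    }

    int main(void)
    {
        void *heap = malloc(16);
        printf("heap object at %p (shared by all threads)\n", heap);

        report("main");
        pthread_t t1, t2;
        pthread_create(&t1, NULL, report, "t1");
        pthread_create(&t2, NULL, report, "t2");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        free(heap);
        return 0;
    }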

Segfault with ulimit

On Linux, I have a program that crashes only if ulimit -s is set to unlimited. The place where it segfaults is in a connection callback in Libmicrohttpd, so the backtrace is pretty deep (around 10 functions stacked up). Whatever function I call first in this callback is where it crashes, even if it is just printf. Here is a stack trace from the core dump: #0 0x000000341fa44089 in vfprintf () from /lib64/libc.so.6 #1 0x000000341fa4ef58 in fprintf () from /lib64/libc.so.6 #2 0x000000000044488d in answer_to_connection…
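
One thing worth checking in this situation, sketched below for glibc (an assumption): the default stack size given to newly created threads is derived from RLIMIT_STACK, so running the same probe under ulimit -s 8192 and under ulimit -s unlimited shows whether the connection threads end up with an unexpected stack size.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sys/resource.h>
    #include <stdio.h>

    static void *report(void *arg)
    {
        (void)arg;
        pthread_attr_t a;
        void *base;
        size_t size;
        pthread_getattr_np(pthread_self(), &a);
        pthread_attr_getstack(&a, &base, &size);
        pthread_attr_destroy(&a);
        printf("new thread's stack size: %zu KiB\n", size / 1024);
        return NULL;
    }

    int main(void)
    {
        struct rlimit rl;
        getrlimit(RLIMIT_STACK, &rl);
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("RLIMIT_STACK: unlimited\n");
        else
            printf("RLIMIT_STACK: %lld KiB\n", (long long)rl.rlim_cur / 1024);

        pthread_t t;
        pthread_create(&t, NULL, report, NULL);   /* default attributes */
        pthread_join(t, NULL);
        return 0;
    }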

How can malloc() allocate more memory than RAM in redhat?

System information: Linux version 2.6.32-573.12.1.el6.x86_64 (mockbuild@x86-031.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC)) #1 SMP Mon Nov 23 12:55:32 EST 2015, RAM 48 GB. Problem: I want to malloc() 100 GB of memory, but it fails to allocate on the redhat system. I find that 100 GB can be allocated on macOS with 8 GB RAM (compiled with clang). I am very confused about this. Maybe it is the lazy allocation described in this link: "Why malloc() doesn't … on OS X…"
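
A short probe, assuming Linux overcommit semantics, that shows the two pieces involved: whether a huge malloc() succeeds is governed by vm.overcommit_memory (heuristic, always, or strict accounting), not by the amount of physical RAM, because the pages are only backed by memory when they are first touched.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int mode = -1;
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
        if (f) {
            if (fscanf(f, "%d", &mode) != 1)
                mode = -1;
            fclose(f);
        }
        printf("vm.overcommit_memory = %d (0 heuristic, 1 always, 2 strict)\n", mode);

        size_t want = (size_t)100 * 1024 * 1024 * 1024;   /* 100 GiB of address space */
        void *p = malloc(want);
        printf("malloc(100 GiB): %s\n",
               p ? "succeeded (nothing is resident until the pages are touched)"
                 : "failed");
        free(p);
        return 0;
    }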