Seg Fault on Large Arrays?
Seeing strange behavior with even a very simple integer array.
%%cython
import numpy as np
cimport cython
cimport numpy as np

def hi():
    DEF MAX = 10000000
    cdef int a[MAX], i
    cdef int[:] a_mv = a
This crashes, but views into smaller arrays work fine. This obviously isn't a memory issue, as there's ample RAM for 10 million ints...
As Kevin mentions in his comment, the problem is not RAM but the stack. You are allocating an array of 10 million elements on the stack, when you should really allocate it on the heap, using malloc and friends. Ten million 4-byte ints is roughly 40 MB, far more than a typical thread stack provides. Even in C this produces a segmentation fault:
/* bigarray.c */
int main(void) {
    int array[10000000];
    array[5000000] = 1; /* Force Linux to actually allocate the memory. */
    return 0;
}
$ gcc -O0 bigarray.c  # -O0 to prevent optimizations by the compiler
$ ./a.out
Segmentation fault (core dumped)
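The failing size is consistent with the default stack limit, which ulimit reports in KiB; 8192 (i.e. 8 MiB) is a common Linux default, though the exact value varies by system:

$ ulimit -s
8192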
While the heap-allocated version runs cleanly:
/* bigarray2.c */
#include <stdlib.h>

int main(void) {
    int *array;
    array = malloc(10000000 * sizeof(int));
    array[5000000] = 1;
    free(array);
    return 0;
}
$ gcc -O0 bigarray2.c
$ ./a.out
$ echo $?
0
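Applied back to the original Cython snippet, here is a minimal sketch of the same fix (assuming the same 10-million-element size; the memoryview cast and buffer lifetime shown are one reasonable arrangement, not the only one):

%%cython
from libc.stdlib cimport malloc, free

def hi():
    DEF MAX = 10000000
    cdef int *a = <int *>malloc(MAX * sizeof(int))  # heap, not stack
    cdef int[:] a_mv
    if a == NULL:
        raise MemoryError()
    try:
        a_mv = <int[:MAX]>a  # wrap the heap buffer in a typed memoryview
        a_mv[5000000] = 1    # use it as before
    finally:
        free(a)  # the view must not be used after this point

Alternatively, allocating with NumPy (a = np.empty(MAX, dtype=np.intc)) gives a heap-backed buffer that the int[:] memoryview can wrap directly, with deallocation handled automatically.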