What happens when a computer program runs?

I know the general theory but I can't fill in the details. I know that a program resides in the secondary memory of a computer. Once the program begins execution, it is entirely copied to RAM. Then the processor retrieves a few instructions at a time (it depends on the size of the bus), puts them in registers and executes them. I also know that a computer program uses two kinds of memory: the stack and the heap, which are also part of the computer's main memory.

What happens when a computer program runs?

I know the general theory, but I can't fill in the details. I know that a program resides in the computer's secondary storage. Once the program begins execution, it is copied entirely into RAM. Then the processor retrieves a few instructions at a time (depending on the size of the bus), puts them into registers and executes them. I also know that a computer program uses two kinds of memory: the stack and the heap, which are also part of the computer's main memory. The stack is used for non-dynamic memory and the heap for dynamic memory (for example, everything related to the new operator in C++). What I don't understand is how these two things are connected. What
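
As a small illustration of the two kinds of memory mentioned above, the following minimal C++ sketch (not taken from the question) shows one object with automatic storage, which lives in the current function's stack frame, and one created with new, which lives on the heap and must be released explicitly:

#include <iostream>

int main() {
    int onStack = 42;            // storage in main()'s stack frame
    int* onHeap = new int(42);   // storage allocated on the heap

    std::cout << "stack object at " << &onStack << "\n";
    std::cout << "heap  object at " << onHeap  << "\n";

    delete onHeap;               // heap memory must be released explicitly
    return 0;                    // stack storage is reclaimed automatically
}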

Performance problems using OpenMP in nested loops

I'm using the following code, which contains an OpenMP parallel for loop nested in another for loop. Somehow the performance of this code is 4 times slower than the sequential version (omitting #pragma omp parallel for). Is it possible that OpenMP has to create threads every time the method is called? In my test it is called 10000 times directly after each other. I heard that sometimes OpenMP keeps the threads spinning.

Performance problems using OpenMP in nested loops

I'm using the following code, which contains an OpenMP parallel for loop nested inside another for loop. Somehow the performance of this code is 4 times slower than the sequential version (omitting #pragma omp parallel for). Is it possible that OpenMP has to create threads every time the method is called? In my test it is called 10000 times in direct succession. I heard that sometimes OpenMP keeps the threads spinning. I also tried setting OMP_WAIT_POLICY=active and GOMP_SPINCOUNT=INFINITE. When I remove the OpenMP pragma, the code is about 10 times faster. Note that the method containing this code
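
One common cause of this kind of slowdown is paying the fork/join cost of a thread team on every call. The following hedged sketch (process and run are placeholder names, not the asker's code) moves the parallel region outside the outer loop and leaves only a work-sharing omp for inside, so the team is created once; whether this helps in practice depends on the OpenMP runtime and on the settings such as OMP_WAIT_POLICY and GOMP_SPINCOUNT mentioned above.

#include <vector>

void process(std::vector<double>& data) {
    // Work-sharing construct only: it uses the team created by the caller
    // and does not spawn a new one. There is an implicit barrier at its end.
    #pragma omp for
    for (long long i = 0; i < (long long)data.size(); ++i)
        data[i] *= 2.0;
}

void run(std::vector<double>& data, int iterations) {
    // The thread team is created once here instead of once per call.
    #pragma omp parallel
    for (int it = 0; it < iterations; ++it)
        process(data);   // every thread runs the outer loop; the omp for
                         // inside splits each iteration's work between them
}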

Programmatically get the cache line size?

All platforms are welcome; please specify the platform for your answer. A similar question: How to programmatically get the CPU cache page size in C++? On Linux (with a reasonably recent kernel), you can get this information out of /sys: /sys/devices/system/cpu/cpu0/cache/ This directory has a subdirectory for each level of cache. Each of those directories contains the following files: coherency_line_size, level, number_of_sets, physical_line_partition, shared_cpu_list, shared_cpu_map, size, type, ways_of_associativity

Programmatically get the cache line size?

All platforms are welcome; please specify the platform for your answer. A similar question: How to programmatically get the CPU cache page size in C++? On Linux (with a reasonably recent kernel), you can get this information out of /sys: /sys/devices/system/cpu/cpu0/cache/ This directory has a subdirectory for each level of cache. Each of those directories contains the following files: coherency_line_size, level, number_of_sets, physical_line_partition, shared_cpu_list, shared_cpu_map, size, type, ways_of_associativity
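
A Linux-only sketch of the sysfs approach described above (index0 is usually the L1 data cache, but real code should check the level and type files first), with the glibc sysconf query shown as an assumed alternative:

#include <fstream>
#include <iostream>
#include <unistd.h>

int main() {
    // Read the line size of the first cache reported for cpu0.
    std::ifstream f(
        "/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size");
    unsigned lineSize = 0;
    if (f >> lineSize)
        std::cout << "L1 line size (sysfs): " << lineSize << " bytes\n";

    // glibc also exposes the same information through sysconf; this
    // constant is a glibc extension, not portable to every platform.
    long viaSysconf = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);
    if (viaSysconf > 0)
        std::cout << "L1 line size (sysconf): " << viaSysconf << " bytes\n";
    return 0;
}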

Difference between two array declaration methods in C++

These are 2 of the probably many ways of declaring arrays (and allocating memory for them) in C++: 1. int a[3]; 2. int *b = new int[3]; I want to understand how C++ treats the two differently. a. In both cases, I can access the arrays with the following syntax: a[1] and b[1]. b. When I try cout << a and cout << b, both print the address of the first element of the respective array.

Difference between two array declaration methods in C++

These are 2 of the probably many ways of declaring arrays (and allocating memory for them) in C++: 1. int a[3]; 2. int *b = new int[3]; I want to understand how C++ treats the two differently. a. In both cases, I can access the arrays with the following syntax: a[1] and b[1]. b. When I try cout << a and cout << b, both print the address of the first element of the respective array. It looks as if both a and b are treated as pointers to the first element of the array. c. But strangely, when I try cout << sizeof(a) and sizeo
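
The point hinted at in c. can be made concrete with a short sketch: sizeof applied to the real array a reports the storage of all three ints, while sizeof applied to b only reports the size of the pointer, because b is just an int* that happens to point at heap storage:

#include <iostream>

int main() {
    int a[3];                 // automatic array: the object *is* 3 ints
    int* b = new int[3];      // b is just a pointer to heap storage

    std::cout << sizeof(a) << "\n";  // typically 12 (3 * sizeof(int))
    std::cout << sizeof(b) << "\n";  // size of a pointer, e.g. 8 on x86-64

    delete[] b;
    return 0;
}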

Resizing std::vector without destroying elements

I am reusing the same std::vector<int> all the time in order to try to avoid allocating and deallocating all the time. In a few lines, my code is as follows: std::vector<int> myVector; myVector.reserve(4); for (int i = 0; i < 100; ++i) { fillVector(myVector); /* use of myVector ... */ myVector.resize(0); } In each for iteration, myVector will be filled with up to 4 elements.

Resizing std::vector without destroying elements

I keep reusing the same std::vector<int> to try to avoid allocating and deallocating all the time. In a few lines, my code is as follows: std::vector<int> myVector; myVector.reserve(4); for (int i = 0; i < 100; ++i) { fillVector(myVector); /* use of myVector ... */ myVector.resize(0); } In each for iteration, myVector will be filled with up to 4 elements. To generate efficient code, I want to always reuse myVector. However, in myVector.resize()
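
A minimal sketch of the pattern being described (fillVector here is a stand-in, not the asker's function): resize(0) or clear() destroys the elements logically but does not release the buffer, so capacity() stays at least 4 and the loop never reallocates.

#include <iostream>
#include <vector>

void fillVector(std::vector<int>& v) {
    for (int i = 0; i < 4; ++i) v.push_back(i);   // stays within capacity
}

int main() {
    std::vector<int> myVector;
    myVector.reserve(4);                          // one allocation up front

    for (int i = 0; i < 100; ++i) {
        fillVector(myVector);
        // ... use myVector ...
        myVector.clear();                         // same effect as resize(0) here
    }
    std::cout << "capacity: " << myVector.capacity() << "\n";  // still >= 4
    return 0;
}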

When will the memory be released?

I have created a code block like this: proc() { Z* z = new Z(); } Now the pointer declared inside the method proc will have scope only until proc ends. I want to ask when the DTOR for z will be called automatically: when control comes out of the method proc, or when my application is closed? The destructor will not be called at all. The memory used by *z will be leaked until the application closes (at which point the OS will reclaim all the memory used by your process).

When will the memory be released?

I have created a code block like this: proc() { Z* z = new Z(); } Now the pointer declared inside the method proc has scope only until proc ends. I want to ask when the DTOR for z will be called automatically: when control comes out of the method proc, or when my application is closed? The destructor will not be called at all. The memory used by *z will be leaked until the application closes (at which point the OS will reclaim all the memory used by your process). To avoid the leak, you must call delete at some point, or better, use a smart pointer.
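
A minimal sketch of the smart-pointer alternative mentioned in the answer, assuming a trivial Z: with std::unique_ptr the destructor of Z runs automatically as soon as control leaves proc, so neither an explicit delete nor waiting for the application to close is needed.

#include <memory>

struct Z {
    ~Z() {}                           // destructor we want to see run
};

void proc() {
    auto z = std::make_unique<Z>();   // heap-allocated Z, owned by z
    // ... use *z ...
}                                     // ~Z() runs and the memory is freed here

int main() {
    proc();                           // no delete needed anywhere
    return 0;
}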

What is slower about dynamic memory usage?

This question already has an answer here: Which is faster: Stack allocation or Heap allocation. Caching issues aside, the CPU stack is just that, a stack, a LIFO list/queue. You remove things from it in exactly the opposite order from the one in which you put them there. You do not create holes in it by removing something from the middle of it. This makes its management extremely trivial:

What is slower about dynamic memory usage?

This question already has an answer here: Which is faster: Stack allocation or Heap allocation. Caching issues aside, the CPU stack is just that, a stack, a LIFO list/queue. You remove things from it in exactly the opposite order from the one in which you put them there. You do not create holes in it by removing something from the middle of it. This makes its management extremely trivial: memory[--stackpointer] = value; /* push */ value = memory[stackpointer++]; /* pop */ Or you can allocate a whole block: stackpointer -= size; /* allocate */ memset(&memor
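
The push/pop and block-allocate lines quoted above can be turned into a tiny self-contained model (the buffer size and the downward growth direction are illustrative assumptions, not part of the answer): allocation is a single pointer adjustment, and because frees happen in LIFO order the allocator never has to track holes or free lists.

#include <cstddef>
#include <cstring>
#include <iostream>

unsigned char memory[1024];
std::size_t stackpointer = sizeof(memory);   // toy stack grows downward

void* stack_alloc(std::size_t size) {
    stackpointer -= size;                    // "allocate": bump the pointer
    std::memset(&memory[stackpointer], 0, size);
    return &memory[stackpointer];
}

void stack_free(std::size_t size) {
    stackpointer += size;                    // LIFO "free": bump it back
}

int main() {
    unsigned char* frame = static_cast<unsigned char*>(stack_alloc(16));
    frame[0] = 42;                           // use the freshly "pushed" block
    std::cout << static_cast<int>(frame[0])
              << ", bytes left: " << stackpointer << "\n";
    stack_free(16);                          // must be the most recent allocation
    return 0;
}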

What is more efficient: stack memory or heap?

Possible Duplicate: C++ Which is faster: Stack allocation or Heap allocation. What is more efficient from a memory allocation perspective - stack memory or heap memory? What does it depend on? Obviously there is an overhead of dynamic allocation versus allocation on the stack. Using the heap involves finding a location where the memory can be allocated and maintaining the supporting structures. On the stack it is simple, because you already know where to put the element.

What is more efficient: stack memory or heap?

Possible Duplicate: C++ Which is faster: Stack allocation or Heap allocation. What is more efficient from a memory allocation perspective - stack memory or heap memory? What does it depend on? Obviously there is an overhead of dynamic allocation versus allocation on the stack. Using the heap involves finding a location where the memory can be allocated and maintaining the supporting structures. On the stack it is simple, because you already know where to put each element. I would like to understand what the overhead is, in milliseconds, in the worst case for the supporting structures that allow dynamic allocation. The stack is usually faster and easier to implement! I tend to agree with Michael from the Joel on Software site
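
A rough, hedged micro-benchmark sketch of the comparison being asked about (absolute numbers depend heavily on the compiler, optimization flags and allocator, so treat the output as illustrative only): the stack buffer costs essentially nothing per iteration, while each new/delete pair goes through the general-purpose heap allocator.

#include <chrono>
#include <iostream>

int main() {
    constexpr int N = 1'000'000;
    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    long sum1 = 0;
    for (int i = 0; i < N; ++i) {
        int local[16];              // stack storage, released on scope exit
        local[0] = i;
        sum1 += local[0];
    }
    auto t1 = clock::now();

    long sum2 = 0;
    for (int i = 0; i < N; ++i) {
        int* p = new int[16];       // heap storage, bookkeeping on every call
        p[0] = i;
        sum2 += p[0];
        delete[] p;
    }
    auto t2 = clock::now();

    std::cout << "stack: "
              << std::chrono::duration<double, std::milli>(t1 - t0).count() << " ms\n"
              << "heap:  "
              << std::chrono::duration<double, std::milli>(t2 - t1).count() << " ms\n"
              << sum1 + sum2 << "\n";   // keep the results observable
    return 0;
}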

Object created in a function: is it saved on the stack or on the heap?

I am using C++ specifically: when I create an object in a function, will this object be saved on the stack or on the heap? The reason I am asking is that I need to save a pointer to an object, and the only place the object can be created is within functions, so if I have a pointer to that object and the method finishes, the pointer might be pointing to garbage afterwards. --> If I add a pointer to the object to a list (which is a member of the class) and the method then finishes, I might end up with elements in the list pointing to garbage.

Object created in a function: is it saved on the stack or on the heap?

I am using C++ specifically: when I create an object in a function, will this object be saved on the stack or on the heap? The reason I am asking is that I need to save a pointer to an object, and the only place the object can be created is within functions, so if I have a pointer to that object and the method finishes, the pointer might be pointing to garbage afterwards. --> If I add a pointer to the object to a list (which is a member of the class) and the method then finishes, I might end up with elements in the list pointing to garbage. So again - when an object is created in a method, will it be saved on the stack (in the function
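
A hedged sketch of the two cases described in the question (Widget, Holder and the member list are made-up stand-ins, not the asker's code): storing the address of a local, stack-allocated object in a member list dangles as soon as the method returns, while an object created with new lives on the heap and outlives the call.

#include <vector>

struct Widget { int value = 0; };

class Holder {
    std::vector<Widget*> list;              // member list of pointers
public:
    void addWrong() {
        Widget local;                       // stack object, dies at '}'
        list.push_back(&local);             // BUG: dangles once we return
    }
    void addRight() {
        list.push_back(new Widget());       // heap object, outlives the call
    }
    ~Holder() {
        for (Widget* w : list) delete w;    // only valid if addRight was used
    }
};

int main() {
    Holder h;
    h.addRight();                           // safe: the pointer stays valid
    return 0;
}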

Understanding Stack, Heap and Memory Management

int *ip = new int[10]; for (int i = 0; i<10; i++) *(ip+i) = i; myfun(ip); /* assume that myfun takes an argument of type int* and returns no result */ delete [] ip; The above code is a small segment of a test function that I am trying to use to learn about the stack and the heap. I am not fully sure what the correct sequence is. This is what I have thus far: When the pointer ip is created, it points to a new int array of size 10 created on the heap because of the "new" statement.

Understanding Stack, Heap and Memory Management

int *ip = new int[10]; for (int i = 0; i<10; i++) *(ip+i) = i; myfun(ip); /* assume that myfun takes an argument of type int* and returns no result */ delete [] ip; The above code is a small segment of a test function that I am trying to use to learn about the stack and the heap. I am not entirely sure what the correct sequence is. This is what I have so far: When the pointer ip is created, it points to a new int array of size 10 created on the heap because of the "new" statement. The values 0-9 are added to array elements 0-9.
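
For reference, a commented version of the snippet (myfun is assumed, per the original comment, to take an int* and return nothing; an empty stub stands in for it here), annotating which parts live on the stack and which on the heap:

void myfun(int*) {}          // assumed stub: takes an int*, returns no result

void test() {
    int* ip = new int[10];   // ip itself is a local variable in test()'s stack
                             // frame; the array of 10 ints it points to is on the heap
    for (int i = 0; i < 10; i++)
        *(ip + i) = i;       // each write goes through ip into the heap array
    myfun(ip);               // only the pointer value is copied into myfun's frame
    delete [] ip;            // releases the heap array; the pointer variable ip
                             // disappears when test() returns
}

int main() {
    test();
    return 0;
}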