I'm trying to get SIZE_MAX in C89. I thought of the following way to find SIZE_MAX:

const size_t SIZE_MAX = -1;

since the standard (ANSI C §6.2.1.2) says: "When a signed integer is converted to an unsigned integer with equal or greater size, if the value of the signed integer is nonnegative, its value is unchanged. Otherwise: if the unsigned integer has greater size, the signed integer is first promoted to the signed integer corresponding to the unsigned integer; the value is converted to unsigned by adding to it one greater than the largest number that can be represented in the unsigned integer type."
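A minimal sketch of that idea (my own illustration, not part of the original question) that prints the resulting maximum value; the name my_size_max is used only to avoid clashing with the SIZE_MAX that a C99 <stdint.h> would define:

```c
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    /* Converting -1 to an unsigned type yields that type's maximum value,
       per the conversion rule quoted above. */
    const size_t my_size_max = (size_t)-1;

    /* C89 has no %zu, so cast to unsigned long for printing; this may
       truncate on platforms where size_t is wider than unsigned long. */
    printf("size_t maximum on this platform: %lu\n",
           (unsigned long)my_size_max);
    return 0;
}
```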
I have started programming practice on codechef and have been confused by the difference between C and C99. What does C mean here? Is it C89? Check the languages at the bottom of this submit: it contains both C and C99. I also found something on the internet called GNU C. Is there a different C for linux/unix systems? Are these compliant with the C standards by ANSI? I have also read in some places about "C99 strict". What is that? Are there any other C standards in use? Is there something called C 4.3.2, or is that the gcc version currently in use? Edit: this, this, and this helped.
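For what it's worth, one way to see which dialect a compiler (or an online judge's compiler) actually applies is to test the standard macros at compile time. A small sketch, assuming a GCC-like compiler where dialects are chosen with -std=c89, -std=c99, or the GNU variants -std=gnu89/-std=gnu99:

```c
#include <stdio.h>

int main(void)
{
    /* __STDC_VERSION__ is 199901L for C99 (and 199409L for C89 plus
       Amendment 1); a plain C89 compiler may not define it at all. */
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    printf("compiled as C99 or later\n");
#elif defined(__STDC__)
    printf("compiled as C89/C90\n");
#else
    printf("compiled as pre-standard C\n");
#endif
    return 0;
}
```

Compiling the same file with gcc -std=c89 -pedantic and then gcc -std=c99 -pedantic makes the difference visible; "GNU C" simply refers to the GNU dialects (-std=gnu89/-std=gnu99) that layer GCC extensions on top of the ISO standards.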
The problem is simple. As I understand it, GCC ensures that chars are byte-aligned and ints 4-byte-aligned in a 32-bit environment. I am also aware of C99 6.3.2.3, which says that converting between misaligned pointer types results in undefined behavior. What do the other C standards say about this? There are also many experienced coders here; any view on this will be appreciated.

int  *iptr1, *iptr2;
char *cptr1, *cptr2;

iptr1 = (int *) cptr1;
cptr2 = (char *) iptr2;

C has only one standard (ISO), in two versions (1989 and 1999), and some...
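If the practical goal is to pull an int out of a char buffer that may not be suitably aligned, a common portable alternative to the pointer cast (my own sketch, not from the question) is to copy the bytes with memcpy and let the compiler deal with alignment:

```c
#include <stdio.h>
#include <string.h>

/* Read an int from an arbitrarily aligned position in a char buffer
   without casting char* to int*: memcpy leaves alignment to the compiler. */
static int read_int_unaligned(const char *p)
{
    int value;
    memcpy(&value, p, sizeof value);
    return value;
}

int main(void)
{
    char buf[16] = {0};
    int x = 0x12345678;

    memcpy(buf + 1, &x, sizeof x);   /* deliberately misaligned placement */
    printf("%x\n", (unsigned)read_int_unaligned(buf + 1));
    return 0;
}
```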
I'm currently deciding on a platform to build a scientific computational product on, and am deciding between C#, Java, or plain C with the Intel compiler on Core2 Quad CPUs. It's mostly integer arithmetic. My benchmarks so far show Java and C are about on par with each other, and .NET/C# trails by about 5%. However, a number of my coworkers are claiming that .NET, with the right optimizations and enough time for the JIT to do its work, will outperform both. I've always assumed the JIT would have done its job within a few minutes of application startup (probably seconds in my case, since it's mostly tight loops)...
It seems like optimization is a lost art these days. Wasn't there a time when all programmers squeezed every ounce of efficiency from their code? Often doing so while walking five miles in the snow? In the spirit of bringing back a lost art, what are some tips you know of for simple (or perhaps complex) changes to optimize C#/.NET code? Since it's such a broad thing that depends on what one is trying to accomplish, it would help to provide context along with your tip. For example: when concatenating many strings together, use StringBuilder. See the link at the bottom for the caveats about this.
I need to find a bottleneck and need to measure time as accurately as possible. Is the following code snippet the best way to measure performance?

DateTime startTime = DateTime.Now;
// Some execution process
DateTime endTime = DateTime.Now;
TimeSpan totalTimeTaken = endTime.Subtract(startTime);

No, it's not. Use the Stopwatch (in System.Diagnostics):

Stopwatch sw = Stopwatch.StartNew();
PerformWork();
sw.Stop();
Console.WriteLine("Time taken: {0}ms", sw.ElapsedMilliseconds);
I have a C program that calls a function pi_calcPiItem() 600000000 times through the function pi_calcPiBlock. So, to analyze the time spent in those functions, I used GNU gprof. The result seems to be erroneous, since all calls are attributed to main() instead. Furthermore, the call graph does not make any sense:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  Ts/call  Ts/call  name
 61.29      9.28     9.28
When I run gprof on my C program it says no time accumulated for my program and shows 0 time for all function calls. However, it does count the function calls. How do I modify my program so that gprof will be able to count how much time something takes to run?

Did you specify -pg when compiling? http://sourceware.org/binutils/docs-2.20/gprof/Compiling.html#Compiling Once it is compiled, run the program and then run gprof on the binary. For example, test.c:

#include <stdio.h>

int main()
{
    int i;
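The test.c example above is cut off; a completed sketch along the same lines (my own filler for the missing body, not the answer's exact code), with the corresponding build and profile commands shown as comments:

```c
#include <stdio.h>

int main(void)
{
    int i;
    double x = 0.0;

    /* Burn enough CPU time for gprof's 0.01-second samples to register. */
    for (i = 0; i < 200000000; i++)
        x += i * 0.5;

    printf("%f\n", x);
    return 0;
}

/* Build with profiling instrumentation, run, then read the profile:
 *
 *   gcc -pg -o test test.c
 *   ./test                  (writes gmon.out in the current directory)
 *   gprof ./test gmon.out
 */
```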
What performance profilers have you used when working with .NET programs, and which would you particularly recommend?

I have used JetBrains dotTrace and Redgate ANTS extensively. They are fairly similar in features and price. They both offer useful performance profiling and quite basic memory profiling. dotTrace integrates with Resharper, which is really convenient, as you can profile the performance of a unit test with one click from the IDE. However, dotTrace often seems to produce spurious results (e.g. saying a method takes several years to run). I prefer the way ANTS presents the profiling results.
We're developing a multithreaded project. My colleague said that gprof works perfectly with multithreaded programs, with no workaround needed. I read otherwise some time ago: http://sam.zoy.org/writings/programming/gprof.html http://lists.gnu.org/archive/html/bug-binutils/2010-05/msg00029.html I also read this: How to profile multi-threaded C++ application on Linux? So I'm guessing the workaround is no longer needed? If so, since when has it not been needed?

gprof will work fine unless you change the processing...
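For reference, the workaround described in the sam.zoy.org article linked above boils down to re-arming the ITIMER_PROF timer, which gprof's sampling relies on, in every newly created thread; whether that is still needed on current systems is exactly what the question asks. A condensed sketch of the idea, simplified from the article's LD_PRELOAD shim and not production code:

```c
#include <errno.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/time.h>

/* Arguments forwarded to the real thread function, plus the creating
   thread's profiling timer settings. */
struct wrap_args {
    void *(*fn)(void *);
    void *arg;
    struct itimerval timer;
};

static void *wrapped_start(void *p)
{
    struct wrap_args a = *(struct wrap_args *)p;
    free(p);
    /* Re-arm ITIMER_PROF in this thread so the profiling signal also
       fires here, not only in the thread that started profiling. */
    setitimer(ITIMER_PROF, &a.timer, NULL);
    return a.fn(a.arg);
}

/* Drop-in replacement for pthread_create in code built with -pg. */
int profiled_pthread_create(pthread_t *tid, const pthread_attr_t *attr,
                            void *(*fn)(void *), void *arg)
{
    struct wrap_args *a = malloc(sizeof *a);
    if (a == NULL)
        return ENOMEM;
    a->fn = fn;
    a->arg = arg;
    getitimer(ITIMER_PROF, &a->timer);   /* copy the creator's timer */
    return pthread_create(tid, attr, wrapped_start, a);
}
```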