Why Is Dynamic Typing So Often Associated with Interpreted Languages?
Simple question folks: I do a lot of programming (professionally and personally) in compiled languages like C++/Java and in interpreted languages like Python/Javascript. I personally find that my code is almost always more robust when I program in statically typed languages. However, almost every interpreted language I encounter uses dynamic typing (PHP, Perl, Python, etc.). I know why compiled languages use static typing (most of the time), but I can't figure out the aversion to static typing in interpreted language design.
Why the steep disconnect? Is it part of the nature of interpreted languages? OOP?
Interesting question. BTW, I'm the author/maintainer of phc (compiler for PHP), and am doing my PhD on compilers for dynamic languages, so I hope I can offer some insights.
I think there is a mistaken assumption here. The authors of PHP, Perl, Python, Ruby, Lua, etc didn't design "interpreted languages", they designed dynamic languages, and implemented them using interpreters. They did this because interpreters are much much easier to write than compilers.
Java's first implementation was interpreted, and it is a statically typed language. Interpreters do exist for static languages: Haskell and OCaml both have interpreters, and there used to be a popular interpreter for C, but that was a long time ago. They are popular because they allow a REPL, which can make development easier.
That said, there is an aversion to static typing in the dynamic language community, as you'd expect. They believe that the static type systems provided by C, C++ and Java are verbose, and not worth the effort. I think I agree with this to a certain extent. Programming in Python is far more fun than C++.
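To illustrate that point, here is a tiny (hypothetical) Python sketch of why dynamic-language code can feel lighter: the same function works on any type that supports `+`, with no declarations, overloads, or generics, where pre-generics Java would have needed casts or separate overloads:

```python
def total(items):
    """Sum anything that supports +, starting from the first element."""
    result = items[0]
    for item in items[1:]:
        result = result + item
    return result

print(total([1, 2, 3]))        # 6 -- works on ints
print(total(["a", "b", "c"]))  # 'abc' -- works on strings too
```

The flip side, of course, is that nothing stops you from calling `total([1, "a"])`, and you only find out at run-time.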
To address the points of others:
dlamblin says: "I never strongly felt that there was anything special about compilation vs interpretation that suggested dynamic over static typing." Well, you're very wrong there. Compilation of dynamic languages is very difficult. The main obstacle is the eval statement, which is used extensively in JavaScript and Ruby. phc compiles PHP ahead-of-time, but we still need a run-time interpreter to handle evals. eval also can't be analysed statically in an optimizing compiler, though there is a cool technique if you don't need soundness.
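A minimal sketch of why eval defeats static analysis (an illustrative Python example, not phc's actual machinery): the code string can be built at run-time, so no ahead-of-time analysis can know what names it will bind or what types they will have:

```python
def run_user_code(snippet, env):
    # A compiler cannot know what 'snippet' contains: it might define
    # new variables, change their types, or rebind functions in 'env'.
    exec(snippet, env)
    return env

env = run_user_code("x = 21 * 2", {})
print(env["x"])  # 42 -- but only because we happen to know the string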
To dlamblin's response to Andrew Hare: you could of course perform static analysis in an interpreter and find errors before run-time, which is exactly what Haskell's ghci does. I expect that the style of interpreter used in functional languages requires this. dlamblin is of course right to say that the analysis is not part of interpretation.
Andrew Hare's answer is predicated on the questioner's wrong assumption, and similarly has things the wrong way around. However, he raises an interesting question: "how hard is static analysis of dynamic languages?". Very very hard. Basically, you'll get a PhD for describing how it works, which is exactly what I'm doing. Also see the previous point.
The most correct answer so far is that of Ivo Wetzel. However, the points he describes can be handled at run-time in a compiler, and many compilers exist for Lisp and Scheme that have this type of dynamic binding. But, yes, it's tricky.
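As a sketch of the kind of dynamic binding involved (shown here in Python, since the thread's examples use it): a global function can be rebound while the program runs, so a call site can't be resolved to a single target at compile time:

```python
def greet():
    return "hello"

def caller():
    # Which 'greet' runs depends on the global binding at call time,
    # not on anything a compiler could fix in advance.
    return greet()

print(caller())              # hello
greet = lambda: "goodbye"    # rebind the global name at run-time
print(caller())              # goodbye
```

Compilers for Lisp and Scheme handle this by looking the binding up at run-time (or by proving it never changes), which is exactly the tricky part.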
Interpreted languages use dynamic typing because there is no compilation step in which to do the static analysis. Compiled languages do static analysis at compilation time which means that any type errors are reported to the developer as they work.
It is easier to understand if you consider that a statically typed language has a compiler that enforces type rules outside the context of execution. Interpreted languages are never analyzed statically so type rules must be enforced by the interpreter within the context of execution.
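A small Python sketch of that difference: the type error below only surfaces if the broken branch actually executes, whereas a static checker would flag it before the program ever ran:

```python
def f(flag):
    if flag:
        return "text" + 1  # type error, but only when this branch runs
    return 0

print(f(False))  # 0 -- the broken branch is never reached, so no error
try:
    f(True)
except TypeError as e:
    print("caught at run-time:", e)
```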
I think it's because of the nature of interpreted languages: they want to be dynamic, so you CAN change things at runtime. Because of this, a compiler never knows exactly what the state of the program is after the next line of code has been executed.
Imagine the following scenario (in Python):

import random

foo = 1

def doSomeStuffWithFoo():
    global foo
    foo = random.randint(0, 1)

def assign():
    global foo
    if foo == 1:
        return 20
    else:
        return "Test"

def toBeStaticallyAnalyzed():
    myValue = assign()
    # A compiler can't know whether assign() returned an int or a string,
    # so it can't prove the next line is safe. At runtime it may raise a
    # TypeError, or it may work fine.
    myValue += 20

doSomeStuffWithFoo()  # foo could be 1 or 0 now... or 4 ;)
toBeStaticallyAnalyzed()
As you can hopefully see, a compiler wouldn't make much sense in this situation. Actually, it could warn you that "myValue" may be something other than a number. But in JavaScript that wouldn't fail, because if "myValue" is a string, 20 would be implicitly converted to a string too, so no error would occur. You might therefore get thousands of useless warnings all over the place, and I don't think that is the intent of a compiler.
Flexibility always comes with a price: you need to take a deeper look at your program, or program it more carefully. In other words, you are the COMPILER in situations like the one above.
So what's your solution as the compiler? Fix it with a "try: except" :)