Advantages of using immutable.js over Object.assign or spread operators
So far, most of the "starter boilerplates" and some posts about React/Redux I've seen encourage the use of immutable.js to address mutability. I personally rely on Object.assign
or spread operators to handle this, and hence don't really see the advantage of immutable.js, as it adds extra learning and shifts a bit away from the vanilla JS techniques used for immutability. I was trying to find valid reasons for a switch, but wasn't able to, hence I am asking here to see why it is so popular.
This is all about efficiency.
Persistent Data Structures
A persistent data structure keeps previous versions of itself when it is mutated, by always yielding a new data structure. To avoid expensive cloning, only the difference from the previous data structure is stored, while the intersection is shared between them. This strategy is called structural sharing. Persistent data structures are therefore much more efficient than cloning with Object.assign
or the spread operator.
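To make structural sharing concrete, here is a minimal sketch using Immutable.js (the state shape is invented for illustration):
const { Map } = require('immutable');
const state = Map({ user: Map({ name: 'Ada' }), settings: Map({ theme: 'dark' }) });
// Updating one branch yields a new map, but the untouched branch is reused, not cloned:
const next = state.setIn(['user', 'name'], 'Grace');
console.log(next.get('settings') === state.get('settings')); // true: shared, not copied
console.log(next === state); // false: a new version was produced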
Drawbacks of persistent data structures in JavaScript
Unfortunately, JavaScript doesn't support persistent data structures natively. That is why immutable.js exists and why its objects differ greatly from plain old JavaScript objects. This leads to more verbose code and a lot of conversions between persistent data structures and native JavaScript data structures.
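For instance, reading nested values and handing data to code that expects plain objects both go through the Immutable.js API and deep conversions (renderTodoList below is a hypothetical consumer):
const { fromJS } = require('immutable');
const state = fromJS({ todos: [{ id: 1, done: false }] });
// Reading deep values requires Immutable.js accessors instead of dot notation:
const done = state.getIn(['todos', 0, 'done']);
// Any consumer expecting plain objects forces a deep (and costly) copy back:
renderTodoList(state.get('todos').toJS());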
The crucial question
When do the benefits of immutable.js's structural sharing (efficiency) exceed its disadvantages (verbosity, conversions)?
I guess the library only pays off in large projects with numerous and extensive objects and collections, where cloning whole data structures and garbage collection get more expensive.
I have created performance benchmarks for multiple immutable libraries; the script and results are located inside the immutable-assign GitHub project. They show that immutable.js is optimized for write operations (faster than Object.assign()), but slower for read operations. The following is a summary of the benchmark results:
-- Mutable
Total elapsed = 50 ms (read) + 53 ms (write) = 103 ms.
-- Immutable (Object.assign)
Total elapsed = 50 ms (read) + 2149 ms (write) = 2199 ms.
-- Immutable (immutable.js)
Total elapsed = 638 ms (read) + 1052 ms (write) = 1690 ms.
-- Immutable (seamless-immutable)
Total elapsed = 31 ms (read) + 91302 ms (write) = 91333 ms.
-- Immutable (immutable-assign (created by me))
Total elapsed = 50 ms (read) + 2173 ms (write) = 2223 ms.
Therefore, whether to use immutable.js or not depends on the type of your application and its read-to-write ratio. If you have lots of write operations, then immutable.js is a good option.
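If you want to gauge the write side of that trade-off on your own data shapes, a minimal sketch (not the immutable-assign benchmark itself; absolute numbers vary by engine, key count, and data shape) could look like this:
const { Map } = require('immutable');

const N = 1000; // number of sequential updates; adjust for your use case

let imm = Map();
console.time('immutable.js writes');
for (let i = 0; i < N; i++) imm = imm.set('key' + i, i);
console.timeEnd('immutable.js writes');

let plain = {};
console.time('Object.assign writes');
for (let i = 0; i < N; i++) plain = Object.assign({}, plain, { ['key' + i]: i });
console.timeEnd('Object.assign writes');

// Object.assign copies every existing key on each update (O(n) per write),
// while immutable.js only rebuilds the path it touches.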
Premature optimization is the root of all evil
Ideally, you should profile your application before introducing any performance optimization; however, immutability is one of those design decisions that must be made early. When you start using immutable.js, you need to use it throughout your entire application to get the performance benefits, because interop with plain JS objects using fromJS() and toJS() is very costly.
I think the main advantage of Immutable.js is in its data structures and speed. Sure, it also enforces immutability, but you should be doing that anyway, so that's just an added benefit.
For example, say you have a very large object in your reducer and you want to change a very small part of it. Because of immutability, you can't change the object directly; you must create a copy. You do that by copying everything (in ES6, using the spread operator).
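With plain objects, that means manually re-spreading every level on the path to the change (the state shape here is invented for illustration):
const state = {
  user: { profile: { name: 'Ada', bio: '...' }, posts: [] },
  settings: { theme: 'dark' }
};
// Changing one nested field requires re-spreading each enclosing level by hand:
const next = {
  ...state,
  user: {
    ...state.user,
    profile: { ...state.user.profile, name: 'Grace' }
  }
};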
So what's the problem? Copying very large objects is slow. Data structures in Immutable.js use something called structural sharing, where you really only change the data you want; the data you aren't changing is shared between the objects, so it doesn't get copied.
The result is highly efficient data structures with fast writes.
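For comparison, a minimal sketch of the same update with Immutable.js, assuming the invented state above converted with fromJS; note the untouched branch keeps its identity:
const { fromJS } = require('immutable');
const state = fromJS({
  user: { profile: { name: 'Ada' }, posts: [] },
  settings: { theme: 'dark' }
});
// Only the path to the change is rebuilt; everything else is shared:
const next = state.setIn(['user', 'profile', 'name'], 'Grace');
console.log(next.get('settings') === state.get('settings')); // true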
Immutable.js also offers easy comparisons of deep objects. For example:
const a = Immutable.Map({ a: Immutable.Map({ a: 'a', b: 'b'}), b: 'b'});
const b = Immutable.Map({ a: Immutable.Map({ a: 'a', b: 'b'}), b: 'b'});
console.log(a.equals(b)) // true
Without this, you'd need some sort of deep-comparison function, which would itself take a lot of time. Immutable.js collections cache a hash code, so two collections that differ can usually be ruled out cheaply; a full equals() check can still walk the structure in the worst case, but in practice these comparisons are very fast regardless of object size.
This can be especially useful in React's shouldComponentUpdate
method, where you can just compare the props using this equals
function, which is very cheap.
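A minimal sketch of that pattern (the component and the user prop are made up; it assumes the prop is an Immutable.js Map):
const React = require('react');

class UserCard extends React.Component {
  // Skip re-rendering unless the immutable prop changed in value
  shouldComponentUpdate(nextProps) {
    return !this.props.user.equals(nextProps.user);
  }
  render() {
    return React.createElement('div', null, this.props.user.get('name'));
  }
}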
Of course, there are also downsides. If you mix Immutable.js structures and regular objects, it can be hard to tell what's what. Your codebase also gets littered with the Immutable.js API, which is different from regular JavaScript.
Another downside is that if you aren't working with deeply nested objects, it will be a bit slower than plain old JS, since the data structures have some overhead.
Just my 2 cents.