
What are the major features of ES2017 and what's on the list for ES2018? This two-part series explores the latest and greatest features of ECMAScript. Part 1 explores major features like async functions and shared memory and atomics, while Part 2 explores minor features.

Let's check in with ECMA International, Technical Committee 39! It turns out the 6 in ES6 does not stand for the number of years it takes for a release. I kid! Since ES6/ES2015 took so long to release (6 years, hence my jab), the committee decided to move to a yearly, small-batch release instead. I'm a big fan of this and I think the momentum keeps things moving and JavaScript improving. What presents did we get for ES2017 and what's on our list for ES2018?

You can learn more about the TC39 process of proposals from 2ality by Dr. Axel Rauschmayer: The TC39 Process for ECMAScript Features.

ES2017

In January, at the TC39 meeting, the group settled on the ECMAScript proposals that would be slated as the features of ES2017 (also referred to as ES8, which probably should be nixed to avoid confusion). This list included:

Major features

Minor features

In this post, the first in a two-part series, we'll cover the major features listed above. The second post covers the minor features.

Async Functions

Async Functions on GitHub (Proposed by Brian Terlson)

I'm starting here because it was first on the list and my level of excitement is pretty high for this nifty addition. In ES2015 we got promises to help us with the all too familiar condition commonly known as… (are you really going to make me say it?) CALLBACK HELL 😱.

The async/await syntax reads as if it were synchronous and was inspired by TJ Holowaychuk's Co package. As a quick overview, the async and await keywords, together with try/catch blocks, let you write functions that behave asynchronously but read like straight-line code. They work like generators but are not translated to Generator Functions. This is what that looks like:

// Old Promise Town
function fetchThePuppies(puppy) {
  return fetch(puppy)
    .then(puppyInfo => puppyInfo.text())
    .then(text => {
      return JSON.parse(text)
    })
    .catch(err =>
      console.log(`Error: ${err.message}`)
    )
}

// New Async/Await City
async function fetchThePuppies(puppy) {
  try {
    let puppyInfo = await fetch(puppy)
    let text = await puppyInfo.text()
    return JSON.parse(text)
  }
  catch (err) {
    console.log(`Error: ${err.message}`)
  }
}
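
To round that out, here's a minimal usage sketch (the URL below is only a placeholder): calling an async function always hands you back a promise, so you can await it from inside another async function or chain .then() on it like any other promise.

// A minimal usage sketch — the URL is just a placeholder
async function showPuppies() {
  let puppies = await fetchThePuppies('https://example.com/api/puppies')
  console.log(puppies)
}

// Or, outside an async function, treat the return value as a promise
fetchThePuppies('https://example.com/api/puppies')
  .then(puppies => console.log(puppies))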

This doesn't mean you should go in and replace all promises in your code with async/await. Just like you didn't go in and replace every function in your code with arrow functions (one hopes), only use this syntax where it works best. I won't go into too much detail here because there are tons of articles covering async/await. Check them out (yes, I did add a link to an async/await blog post for each of those last words in the previous sentence, you're welcome 😘). Over the upcoming year we will see how people are able to make their code more readable and efficient using async/await.

Shared Memory and Atomics

Shared Memory and Atomics on GitHub (Proposed by Lars T. Hansen)

Wait, did we enter a theoretical physics class? Sounds fun, but no. This ECMAScript proposal joined the ES2017 lineup and introduces SharedArrayBuffer and a namespace object Atomics with helper functions. Super high-level (pun intended), this proposal is our next step towards high-level parallelism in JavaScript.

We're using JavaScript for more and more operations in the browser, relying on Just-in-Time compilers and fast CPUs. Unfortunately, as Lars T. Hansen says in his awesome May 2016 post, A Taste of JavaScript's New Parallel Primitives:

But JS JITs are now improving more slowly, and CPU performance improvement has mostly stalled. Instead of faster CPUs, all consumer devices — from desktop systems to smartphones — now have multiple CPUs (really CPU cores), and except at the low end they usually have more than two. A programmer who wants better performance for her program has to start using multiple cores in parallel. That is not a problem for "native" applications, which are all written in multi-threaded programming languages (Java, Swift, C#, and C++), but it is a problem for JS, which has very limited facilities for running on multiple CPUs (web workers, slow message passing, and few ways to avoid data copying).

SharedArrayBuffer

This proposal provides us with the building blocks for multi-core computation to research different approaches to implement higher-level parallel constructs in JavaScript. What might those building blocks be? May I introduce you to SharedArrayBuffer? MDN has a great, succinct definition, so I'll just plop that in right here:

The SharedArrayBuffer object is used to represent a generic, fixed-length raw binary data buffer, similar to the ArrayBuffer object, but in a way that they can be used to create views on shared memory. Unlike an ArrayBuffer, a SharedArrayBuffer cannot become detached.

I don't know about you but the first time I read that I was like, "wat."

Basically, one of the first ways we were able to run tasks in parallel was with web workers. Since workers run in their own global environments, they can't share data by default; they have to communicate with each other, or with the main thread, by passing messages. The SharedArrayBuffer object allows you to share bytes of data between multiple workers and the main thread. Plus, unlike its predecessor ArrayBuffer, the memory represented by SharedArrayBuffer can be referenced from multiple agents (i.e. web workers or the web page's main program) simultaneously. You share it by using postMessage to pass the SharedArrayBuffer from one of these agents to another. Put it all together, and what do you got? Sharing data between multiple workers and the main thread using SharedArrayBuffer so that you can execute multiple tasks at once, which == parallelism in JavaScript. But wait, there's more!
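
Here's a rough sketch of what that hand-off can look like, assuming a worker file named worker.js and an arbitrary 1 KB buffer (both placeholders):

// main.js — create the shared memory and hand it to a worker
const worker = new Worker('worker.js')
const sab = new SharedArrayBuffer(1024) // 1024 bytes of shared memory

// postMessage shares the buffer; both agents now reference the
// same memory rather than receiving a copy
worker.postMessage(sab)

// worker.js — view the same bytes through a typed array
onmessage = (event) => {
  const shared = new Int32Array(event.data)
  shared[0] = 42 // the main thread sees this write too
}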

SharedArrayBuffer Update

Before we move on it's important to note some current hold-ups for SharedArrayBuffer. If you've been paying attention to the news lately you may be aware of the processor chip security design flaw causing two vulnerabilities: Meltdown and Spectre. Feel free to read up on it but just know that browsers are disabling SharedArrayBuffer until this issue is resolved.

Atomics

Okay, the next stop on this parallel train: Atomics, a global object with static helper methods (we'll focus on two of them here). First, let me present you with the problem the Atomics methods solve. When sharing a SharedArrayBuffer betwixt 🎩 agents (as a reminder, agents are the web workers or the web page's main program), each of those agents can read and write to its memory at any time. So, how do you keep this sane and organized, making sure each agent knows to wait for another agent to finish writing its data?

Enter the Atomics methods wake and load! Agents "sleep" in the wait queue while waiting for another agent to finish writing its data, and Atomics.wake is the method that lets them know it's time to wake up. When you need to read the data, you use Atomics.load to load it from a certain location. That location comes from the method's two parameters: a TypedArray, an array-like view for accessing raw binary data (which is what sits on top of the SharedArrayBuffer), and an index giving the position in that TypedArray. There is more to it than what we've just covered, but that's the gist of it.

wake and load are just two pieces of the Atomics toolkit. As Hansen (our lovely author of this proposal and explainer of parallel things) lays out, methods like store and compareExchange are what make it possible to truly implement synchronization. Again, we are at the beginning stages of parallelism in JavaScript, and this proposal is providing us with the building blocks to get there.
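
To make that concrete, here's a rough coordination sketch building on the hypothetical worker setup from the SharedArrayBuffer example above (one detail worth flagging: newer engines expose Atomics.wake under the name Atomics.notify):

// worker.js — sleep until the main thread signals that data is ready
onmessage = (event) => {
  const shared = new Int32Array(event.data)
  Atomics.wait(shared, 0, 0)           // sleep while index 0 is still 0
  console.log(Atomics.load(shared, 1)) // safe to read the data now
}

// main.js — write the data, flip the flag, then wake the waiting worker
const shared = new Int32Array(sab)
Atomics.store(shared, 1, 123) // the data itself
Atomics.store(shared, 0, 1)   // the "ready" flag at index 0
Atomics.wake(shared, 0, 1)    // wake one agent waiting on index 0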

Phew! Although that was quite a lot to think about, it was still a high-level overview. This update may not be used by most developers in the next year, but it will help advance JavaScript to benefit everyone. So, thank your brain for getting you this deep and check out these fantastic resources to dive in further!


About the Author

Tara Z. Manicsic

Tara Z. Manicsic is a lifelong student, teacher, and maker. She has spent her career using JavaScript on both the back-end and front-end to create applications. A Developer Advocate for Progress, Google Developer Expert, and international technical speaker, she focuses on conveying the knowledge she has gained. She also works in her community, launching and directing the Cincinnati Chapter of Women Who Code and the Cincinnati branch of NodeSchool.
