Native JavaScript Development after Internet Explorer


An illustration of a boy and girl with Chrome and Firefox logos playing with new technology at the park while an old man with an IE logo feeds ducks

Welcome everyone to the third and last part of this series, dedicated to the retirement of oldIE and the changes this event brings to the field of front-end development. So far we have covered the obsolete techniques that can safely be discarded and the HTML5 and CSS3 properties that now have full native support across the mainstream browsers. Today we will focus on native JavaScript techniques, along with anything else that didn't fit into the categories mentioned before.

Once more, a lot of credit goes to CanIUse.com, which proved to be an invaluable resource. I will also reiterate my disclaimer from last time:

This article has nothing to do with the decision whether or not to abandon support for oldIE. You and you alone must take that decision based on the specific details of your website or application.

With all this being said, let us proceed!

Key Takeaways

  • Adoption of Modern JavaScript APIs: With Internet Explorer’s retirement, developers can now utilize modern JavaScript APIs like Base64 encoding and Blob constructing directly in mainstream browsers without polyfills, enhancing performance and compatibility.
  • Enhanced Communication Features: The availability of APIs like Channel Messaging and WebSockets in modern browsers facilitates more effective script-to-script communication and persistent connections between the browser and server, respectively.
  • Introduction of ES6 Syntax: The support for ES6 features such as `const` and `let` for block-level variable declarations and the spread of arrow functions allows developers to write cleaner, more efficient code.
  • Security and Privacy Prioritization: The Web Cryptography API and Content Security Policy (CSP) are now fully supported, providing robust tools for enhancing security and user privacy across web applications.
  • Performance Optimization: New capabilities like the Page Visibility API and requestAnimationFrame improve the efficiency of web applications, optimizing resource usage and animation performance.
  • Future-Proofing Web Development: With the discontinuation of Internet Explorer, developers are encouraged to leverage the full potential of HTML5, CSS3, and JavaScript without the constraints of legacy browser compatibility, paving the way for more innovative and forward-compatible web applications.

1. JavaScript APIs

In this section we will go through quite a list of JavaScript features, APIs and functionalities. What do they all have in common? None of them could really be used on oldIE: either they required polyfills, or their effect had to be achieved through other frameworks and libraries (if it could be achieved at all). In the current context (IE11 + Edge), they have native support built into the browser, meaning they can be used straight out of the box.

Base64 encoding and decoding (btoa and atob)

Base64 is a very useful tool for the web. Many of you have probably already used it to embed fonts or images into CSS. Another common use is to handle resources that would normally interfere with transport protocols. A great example is Basic Access Authentication, where the username:password pair is packaged with Base64 and then sent to the server. Native support for the encoding and decoding operations means they can run a lot faster. Here are a few resources to get you started:
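As a quick sketch, here is how the two native functions pair up; the Basic Auth header at the end is just an illustration:

```javascript
// btoa() encodes a binary string to Base64; atob() reverses it.
const credentials = 'user:s3cret';

const encoded = btoa(credentials);
console.log(encoded); // "dXNlcjpzM2NyZXQ="

const decoded = atob(encoded);
console.log(decoded); // "user:s3cret"

// Typical use: the Basic Access Authentication header value.
const authHeader = 'Basic ' + encoded;

// Caveat: btoa() only accepts Latin-1 characters; wider Unicode input
// must be converted to a byte-safe form first, or it throws.
```
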

Blob constructing

A Binary Large OBject, or BLOB, is a collection of raw data stored as a single entity in a database management system. It can be an audio clip or an image stored in Base64 format, or a collection of images. In many cases, Blob fields are used for data that isn't rigidly structured enough to be expressed through a normal table or schema of tables, such as a JSON object. Some of you might remember the ancestor of the Blob interface, the BlobBuilder. That approach has been deprecated, though, and it is strongly recommended that all Blob manipulation happen through the new interface.

On top of that, because such a collection is so similar to a file, the native Blob interface has been used as the base for the File() interface. As a result, there is one nice feature called “Blob URLs” that allows developers to create URLs for Blob objects, which can be used anywhere a file could be used. With this in mind, it's much appreciated that native support now covers all the mainstream browsers.
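A minimal sketch of building a Blob and minting a blob: URL for it (the JSON payload is arbitrary):

```javascript
// Assemble a Blob from string parts and give it a MIME type.
const parts = ['{"name":"test",', '"value":42}'];
const blob = new Blob(parts, { type: 'application/json' });

console.log(blob.size); // 26 (bytes)
console.log(blob.type); // "application/json"

// A blob: URL lets the Blob stand in anywhere a file URL is expected,
// e.g. as the src of an image or the href of a download link.
const url = URL.createObjectURL(blob);
console.log(url); // "blob:..." (opaque, generated by the engine)

// Release the reference once it's no longer needed.
URL.revokeObjectURL(url);
```
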

Channel Messaging

Normally, two scripts running in different browser contexts are prohibited from communicating with one another, to avoid a host of security pitfalls. There are times, though, when such communication is not only desired but really necessary. This is where the Channel Messaging API comes into play. This interface allows our two scripts to communicate over a two-way pipe. It's like handing each one a walkie-talkie set to the same channel. Neat, isn't it?
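A minimal sketch of the pipe in action; in a real page you would transfer one port to an iframe or a worker:

```javascript
const channel = new MessageChannel();

// Each end of the pipe is a MessagePort. Listen on one...
channel.port1.onmessage = (event) => {
  console.log('port1 received:', event.data);
  channel.port1.close(); // hang up when done
};

// ...and talk on the other.
channel.port2.postMessage('hello over the walkie-talkie');

// In a browser you would hand port2 to another context, e.g.:
// iframe.contentWindow.postMessage('init', '*', [channel.port2]);
```
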

Constants and block-level variables

const and let are two new ways to declare data in ES6. While var defines variables with either global or function scope, the new additions have block-level scope. This means that variables created with const and let are limited to the pair of braces they were declared in.

While let defines a variable that (scope aside) behaves identically to a classic variable, a constant (const) is a read-only reference to a value. It can't be reassigned or redeclared, and it can't share a name with any other variable or function in the same scope. The only exception is when the constant is an object with its own attributes: those attributes are not protected from change and behave like normal variables.

This being said, have a look at the proper way to use constants and block-level variables in your code:
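For instance, here is a minimal sketch:

```javascript
// let: block-scoped, with a fresh binding per loop iteration.
const getters = [];
for (let i = 0; i < 3; i++) {
  getters.push(() => i);
}
console.log(getters.map(fn => fn())); // [0, 1, 2] (with var: [3, 3, 3])

// const: a read-only binding...
const LIMIT = 10;
// LIMIT = 20; // TypeError: Assignment to constant variable.

// ...but an object held by a const can still be mutated:
const config = { retries: 3 };
config.retries = 5; // allowed: the binding is frozen, not the value
console.log(config.retries); // 5
```
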

Console Logging

Most front-end developers would agree that the web console is one of the most useful tools to have at hand when your scripts are not behaving as they should. Yet Internet Explorer was pretty slow to integrate it, with full support arriving only in version 10. Now, with oldIE retired, there is nothing stopping us from making the most of this feature. And if you need to refresh your knowledge, or even discover new ways to use the console, have a look at the specs below:
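A few of the lesser-known console methods, beyond plain console.log:

```javascript
// Grouping related output, tabular display, and timers.
console.group('User lookup');
console.info('cache miss, querying the server');
console.table([
  { name: 'Ann', role: 'admin' },
  { name: 'Bob', role: 'user' },
]);
console.groupEnd();

console.time('parse');
JSON.parse('{"ok":true}');
console.timeEnd('parse'); // prints the elapsed time for the label

console.warn('soft problem');
console.error('hard problem');
```
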

Cross-Origin Resource Sharing

Cross-origin resource sharing (CORS) is an HTML5 API that enables a page to request resources from outside its own domain. It describes a set of HTTP headers that let browsers and servers request remote resources when a specific permission is granted. The following resources are a good starting point for learning how to use this feature correctly:
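On the server side, granting access boils down to echoing the right headers back. Here is a sketch; buildCorsHeaders is a made-up helper for illustration, not a standard API:

```javascript
// Decide which CORS headers to send for a given request origin.
function buildCorsHeaders(requestOrigin, allowedOrigins) {
  if (!allowedOrigins.includes(requestOrigin)) {
    return {}; // no grant: the browser will block the response
  }
  return {
    'Access-Control-Allow-Origin': requestOrigin,
    'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
    'Access-Control-Allow-Headers': 'Content-Type',
  };
}

const headers = buildCorsHeaders(
  'https://app.example.com',
  ['https://app.example.com']
);
console.log(headers['Access-Control-Allow-Origin']); // "https://app.example.com"
```
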

Web Cryptography API

Today, security and privacy are two of the most sought-after features of any app, which means that good (and fast) cryptography is much appreciated. All mainstream browsers now have consistent support for the Web Cryptography API, with the exception of IE11 (which implements an older version of the spec) and Safari (which requires the crypto.webkitSubtle prefix). Fortunately, some specific functionalities (like the generation of random values) are better implemented. As a result, it's easier than ever to implement elements of cryptography with native support. Here are a few guidelines on how to do it properly:

Internationalization

Nowadays, the ubiquity of Internet access means that visitors to your websites can come from all around the world. Since people trust what is familiar to them, it is good practice to present your information both in their language and in the formats they are accustomed to. That's where you need Internationalization (also known as i18n) and Localization (or L10n). Does this sound like gibberish to you? Let's quote Aurelio De Rosa from his article on How to Implement Internationalization (i18n) in JavaScript:

Internationalization (also known as i18n) is the process of creating or transforming products and services so that they can easily be adapted to specific local languages and cultures. Localization (also known as L10n) is the process of adapting internationalized software for a specific region or language. In other words, internationalization is the process of adapting your software to support multiple cultures (currency format, date format, and so on), while localization is the process of implementing one or more culture.

Browser support is slightly better than what it was at the beginning of the year, with Safari v10 joining the ranks in September. Sounds interesting enough? Here are some resources to put you on the right path:
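The Intl object is the doorway: the same value renders differently per locale (the outputs shown assume full ICU locale data, which modern engines ship by default):

```javascript
const price = 1234567.89;
console.log(new Intl.NumberFormat('en-US').format(price)); // "1,234,567.89"
console.log(new Intl.NumberFormat('de-DE').format(price)); // "1.234.567,89"

const date = new Date(Date.UTC(2016, 10, 15));
console.log(
  new Intl.DateTimeFormat('en-GB', { timeZone: 'UTC' }).format(date)
); // "15/11/2016"
```
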

Handling media queries

Responsive web design is the current standard for performant websites, and the key feature that makes it possible is the existence of media queries. matchMedia brings media queries from CSS into JavaScript, giving developers a lot more flexibility to optimize content for all sorts of devices. A good example would be handling the switch between portrait and landscape mode on mobile phones and tablets. While there is an API dedicated to detecting device orientation, support for it is only partial in most browsers, with Microsoft Edge alone providing full support. Here are some resources to get you started on this topic:
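A sketch of reacting to an orientation query; the decision is kept in a pure function (anything with a `matches` flag will do), so it also runs outside a browser:

```javascript
// Given anything with a `matches` flag, pick a layout name.
function layoutFor(query) {
  return query.matches ? 'landscape' : 'portrait';
}

// Browser-only wiring, guarded so the logic above stays portable:
if (typeof window !== 'undefined' && window.matchMedia) {
  const mq = window.matchMedia('(orientation: landscape)');
  console.log('current layout:', layoutFor(mq));
  mq.addEventListener('change', () => {
    console.log('layout is now:', layoutFor(mq));
  });
}
```
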

Media Source Extensions

Media Source Extensions (MSE) add extra functionality to the video and audio elements without using plug-ins. This gives you such things as adaptive media streaming, live streaming, splicing videos, and video editing. YouTube has been using MSE with its HTML5 player since September 2013. Browser support is also quite good, with only iOS Safari and Opera Mini missing this functionality. IE11 has full support only when used on Windows 8+. Unfortunately, IE11/Win7 users are not able to benefit from this technology. Whether you are just curious or really want to start working with this API, you will find the following resources quite useful:
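In outline, MSE work revolves around a MediaSource object and its SourceBuffers. The mime strings and segment URL below are placeholders, and the type picker is a small helper of our own:

```javascript
// Pick the first container/codec combination the browser can play.
function pickSupportedType(candidates, isTypeSupported) {
  return candidates.find((type) => isTypeSupported(type)) || null;
}

// Browser-only wiring:
if (typeof MediaSource !== 'undefined' && typeof document !== 'undefined') {
  const mime = pickSupportedType(
    ['video/webm; codecs="vp9"', 'video/mp4; codecs="avc1.42E01E"'],
    (t) => MediaSource.isTypeSupported(t)
  );

  const mediaSource = new MediaSource();
  const video = document.querySelector('video');
  video.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener('sourceopen', () => {
    const sourceBuffer = mediaSource.addSourceBuffer(mime);
    fetch('/segments/init.webm') // placeholder segment URL
      .then((res) => res.arrayBuffer())
      .then((data) => sourceBuffer.appendBuffer(data));
  });
}
```
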

Mutation Observers

JavaScript apps are growing more and more complex every day. As a developer, you need to stay in control of the changes that happen on the page, especially when the DOM tree changes, or “mutates”. The need for this kind of monitoring is not new, and there was already a solution: mutation events. The problem with that interface is that, being events, they are synchronous (fired when triggered, possibly preventing other events from firing) and must be captured or bubbled through the DOM. This, in turn, can trigger other events, overloading the JavaScript thread and, in some special cases, generating entire cascades of failures that crash the browser.

As a result, mutation events have been deprecated and replaced with mutation observers. What's the difference, you might ask? First and most importantly, observers are asynchronous: they don't hold back your scripts from running. Instead of firing at every mutation, they deliver a batch of results after the main activity is complete. What's more, you can fine-tune observers to report either all changes to a node or only specific categories of changes (say, only changes to the list of children, or only to the attributes). Start learning how to do it with the following resources:
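A sketch of wiring one up; summarize is a small helper of our own for digesting the batched records:

```javascript
// Count the batched mutation records by type.
function summarize(mutations) {
  const counts = {};
  for (const m of mutations) {
    counts[m.type] = (counts[m.type] || 0) + 1;
  }
  return counts;
}

// Browser-only wiring:
if (typeof MutationObserver !== 'undefined' && typeof document !== 'undefined') {
  const observer = new MutationObserver((mutations) => {
    console.log('DOM changed:', summarize(mutations));
  });
  // Watch only what you need: here, children and attributes, whole subtree.
  observer.observe(document.body, {
    childList: true,
    attributes: true,
    subtree: true,
  });
  // observer.disconnect(); // when you no longer care
}
```
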

Page Visibility

Tabbed browsing changed the way we interact with the web. It's not uncommon for people to have dozens of pages open at the same time. Each of these pages does its own thing: running scripts, downloading resources and so on. Even though only one tab can be active at a given time, all open pages consume CPU time and bandwidth. Some apps might send and receive updates over the connection on a periodic basis. Yet, if that app isn't in the active tab, does it need to update every X seconds in the background? That seems kind of wasteful, doesn't it? Especially on mobile devices and data plans, where every resource is at a premium.

This is where the Page Visibility API comes into play. This interface lets developers know whether their app is running in the active tab or in the background. Let's take the case of the app doing updates that I mentioned earlier. With the Page Visibility API you can detect when the app is in the background and, instead of updating every 5 or 10 seconds, do it every 60 seconds or even less often. As soon as the page is visible again, you can switch back to the normal rate of updates. Pretty cool, isn't it?
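A sketch of exactly that throttling idea (the intervals and the poll body are placeholders):

```javascript
// Poll slowly in the background, quickly when visible.
function updateInterval(hidden) {
  return hidden ? 60000 : 5000; // milliseconds
}

function poll() {
  // fetch fresh data here
}

// Browser-only wiring around the Page Visibility API:
if (typeof document !== 'undefined' && 'hidden' in document) {
  let timer = setInterval(poll, updateInterval(document.hidden));
  document.addEventListener('visibilitychange', () => {
    clearInterval(timer);
    timer = setInterval(poll, updateInterval(document.hidden));
  });
}
```
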

So what are you waiting for? Have a look at the following guides and start optimizing your apps for page visibility. Your users will thank you for it:

Page Transition Events

Have you ever used a web form that, when you tried to navigate away or close the page, popped up a message saying you had unsaved data? It's pretty common nowadays on pages where you change settings, profile details and the like. How do the scripts in the page know that you want to leave? They listen for the pagehide event.

pagehide and its partner pageshow are the main protagonists of the Page Transition Events. We've seen above what the first one is mainly used for. The main use for pageshow is to determine whether a page has been loaded from cache or fetched fresh from the server. Not the most common of uses but, if you need either functionality, have a look at the resources below:
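Both events hang off window, and event.persisted on pageshow is the cache flag:

```javascript
// true when the page was restored from the back/forward cache.
function loadedFromCache(event) {
  return event.persisted === true;
}

// Browser-only wiring:
if (typeof window !== 'undefined') {
  window.addEventListener('pageshow', (event) => {
    console.log(
      loadedFromCache(event) ? 'restored from cache' : 'fresh from the server'
    );
  });

  window.addEventListener('pagehide', () => {
    // last chance to persist state before the page goes away
  });
}
```
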

requestAnimationFrame

Animation on the web has come a long way: from the early days of <marquee> and <blink>, through animated GIFs and jQuery effects, to today's CSS, SVG, canvas and WebGL animations. A constant among all these methods is the need to control the flow of the animation and to make it as smooth as possible.

The initial approach used setInterval and setTimeout to control the steps of the animation. The problem is that the results are not reliably consistent and the animation is often rough. That's why a new interface was conceived: requestAnimationFrame. Its main advantage is that the browser is free to align the requests with its own painting cycles, which visibly smooths the animations. Together with its counterpart, cancelAnimationFrame, these two methods are the basis of modern JavaScript animation.
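The canonical loop looks like this; the progress computation is plain math, kept separate so it can run anywhere:

```javascript
// Fraction of the animation completed, clamped to 1.
function progress(start, now, duration) {
  return Math.min((now - start) / duration, 1);
}

// Browser-only wiring:
if (typeof requestAnimationFrame === 'function') {
  const duration = 300; // ms
  let start = null;
  let handle = null;

  function step(timestamp) {
    if (start === null) start = timestamp;
    const p = progress(start, timestamp, duration);
    // e.g. element.style.transform = `translateX(${p * 200}px)`;
    if (p < 1) handle = requestAnimationFrame(step);
  }

  handle = requestAnimationFrame(step);
  // cancelAnimationFrame(handle); // to stop early
}
```
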

As usual, below are some resources to get you started in mastering this functionality.

Timing APIs

Online performance is the topic of the day, and everyone is trying their best to slim down resources, optimize scripts and make the most of the tools at their disposal. There are two main angles on the topic: network performance (how fast the site and its resources are delivered) and user performance (how fast the application itself performs).

Network performance is served by two APIs: Navigation Timing and Resource Timing. Both provide all kinds of network-related measurements, such as DNS lookup, CDN, request and response times, and so on. The only difference is that Navigation Timing targets the HTML page itself, while Resource Timing deals with all the other resources (images, scripts, media, etc.).

On the user performance side, we have one API: User Timing. It deals with two main concepts, called Mark (a highly detailed timestamp) and Measure (the interval between two Marks). Tinkering with these concepts lets developers measure how fast their code runs and identify places that require optimization. Unfortunately, this API is still not supported in Safari, so a polyfill might be required.
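Marks and Measures in action (the work being timed is a throwaway loop):

```javascript
performance.mark('task:start');

let total = 0;
for (let i = 0; i < 1e6; i++) total += i; // stand-in for real work

performance.mark('task:end');
performance.measure('task', 'task:start', 'task:end');

const [measure] = performance.getEntriesByName('task');
console.log(measure.name, measure.duration.toFixed(2) + 'ms');
```
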

Mastering the use of these interfaces becomes a vital skill in the quest to ensure optimal performance of your website or app. That’s why we’re giving you some materials to start learning:

Typed Arrays

JavaScript typed arrays are array-like objects that provide a mechanism for accessing raw binary data. For maximum flexibility and efficiency, the implementation is split across two concepts: buffers (chunks of raw data) and views (which provide a context in which the buffer data can be interpreted). A number of web APIs use typed arrays, such as WebGL, Canvas 2D, XMLHttpRequest2, File, Media Source or binary WebSockets. If your code deals with such technologies, you might be interested in the resources below:
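The buffer/view split in a nutshell:

```javascript
// One chunk of raw memory...
const buffer = new ArrayBuffer(8); // 8 bytes, no inherent type

// ...seen through two different lenses.
const asBytes = new Uint8Array(buffer); // 8 unsigned bytes
const asInts = new Int32Array(buffer);  // 2 signed 32-bit integers

asBytes[0] = 255;
console.log(asInts[0]); // 255 (on little-endian platforms, i.e. nearly all)

asInts[1] = -1; // all 32 bits set...
console.log(Array.from(asBytes.slice(4))); // [255, 255, 255, 255]
```
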

WebSockets

We talked earlier about Channel Messaging and how it lets two scripts communicate directly with each other. WebSockets are similar, and a lot more besides: using this API creates a persistent communication channel between the browser and the web server.

Just like HTTP, the WebSocket protocol has two versions: unsecured (ws://...) and secured (wss://...). It also takes into account proxy servers and firewalls, by opening tunnels through them. In fact, a WebSocket connection starts as a normal HTTP connection, ensuring compatibility with the existing infrastructure.
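Client-side usage is compact; the host below is a placeholder, and the URL builder is a trivial helper of our own:

```javascript
// Pick the scheme to match the page's own security.
function socketUrl(host, secure) {
  return (secure ? 'wss' : 'ws') + '://' + host;
}

// Browser-only wiring (echo.example.com is a placeholder host):
if (typeof window !== 'undefined' && typeof WebSocket !== 'undefined') {
  const socket = new WebSocket(socketUrl('echo.example.com/chat', true));

  socket.addEventListener('open', () => socket.send('ping'));
  socket.addEventListener('message', (event) => {
    console.log('server says:', event.data);
    socket.close();
  });
  socket.addEventListener('error', (err) => console.error('socket error', err));
}
```
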

WebSockets are a fascinating piece of technology (they even have a dedicated website), there is a lot to learn about them. To get you started, here are a few selected resources:

Web Workers

By default, all JavaScript tasks run in the same thread. This means that all scripts in a page have to share the same queue for processing time. That was nice and simple when processors had a single core, but modern CPUs have at least two cores, reaching 4, 6 or 8 on some models. Wouldn't it be nice if some tasks could be moved into separate threads processed by the extra cores available? That's what Web Workers were invented for.

Using the Web Workers API, a developer can delegate a named script file to a worker that runs in a separate thread. The worker answers only to the script that created it, communicating both ways via messages. It can run XMLHttpRequest calls, but it does not interact with the DOM or with some of the default methods and properties of the window object. Among the exceptions we can mention WebSockets (a worker can be assigned to manage the WebSocket connection) and data storage mechanisms like IndexedDB. There's nothing like having your own minions handling secondary tasks while the main thread focuses on running the entire app.
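In sketch form; worker.js is a placeholder file name, and sum is the kind of pure, CPU-bound work worth delegating:

```javascript
// Work worth delegating: pure, CPU-bound, no DOM needed.
function sum(numbers) {
  return numbers.reduce((a, b) => a + b, 0);
}

// Main thread (browser-only):
if (typeof window !== 'undefined' && typeof Worker !== 'undefined') {
  const worker = new Worker('worker.js'); // placeholder script name
  worker.postMessage({ numbers: [1, 2, 3, 4] });
  worker.onmessage = (event) => console.log('worker result:', event.data);
  worker.onerror = (event) => console.error('worker failed:', event.message);
}

// Inside worker.js there is no window, only `self`:
// self.onmessage = (event) => {
//   self.postMessage(sum(event.data.numbers));
// };
```
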

To get you started with this functionality (including a list of functions and classes available to web workers), check the resources below:

XMLHttpRequest advanced features

The adoption of XMLHttpRequest heralded a new age in web development. Data could now be exchanged between browser and server without reloading the entire page. AJAX was the new standard, making possible the single-page applications everyone loves today.

It is only natural that such a useful technology would be advanced even further. This is how XHR gained new functionality, like file uploads, information on transfer progress and the ability to send form data directly. And all these functionalities (with minor exceptions in the case of IE11 or old versions of Android) are supported by the mainstream browsers after the retirement of oldIE.
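A sketch of an upload with progress reporting; /upload is a placeholder endpoint and percent a tiny helper of our own:

```javascript
// Round a loaded/total pair to a whole percentage.
function percent(loaded, total) {
  return Math.round((100 * loaded) / total);
}

// Browser-only wiring:
if (typeof XMLHttpRequest !== 'undefined' && typeof document !== 'undefined') {
  const data = new FormData(document.querySelector('form'));

  const xhr = new XMLHttpRequest();
  xhr.open('POST', '/upload'); // placeholder endpoint
  xhr.upload.addEventListener('progress', (event) => {
    if (event.lengthComputable) {
      console.log(percent(event.loaded, event.total) + '% sent');
    }
  });
  xhr.addEventListener('load', () => console.log('finished with status', xhr.status));
  xhr.send(data);
}
```
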

For more details, feel free to peruse the following resources:

2. Miscellaneous features

The modern web is not just HTML, CSS and JavaScript. There are many unseen (and unsung) heroes toiling behind the scenes to make our online experience as good as possible. Below we will discuss several such features that, just like the ones above, couldn't be used on the oldIE browsers (which were notorious for their security holes and lack of support for modern features).

Non-blocking JavaScript loading with “async” and “defer”

Every web developer learns early on that scripts are “load-blocking” and will hold the entire page hostage until they finish loading. We all remember the recommendation to load jQuery right before the closing </body> tag. Such an approach is useless, though, for single-page apps, where all of the website's behavior is driven by JavaScript. Which brings us back to square one.

But the truth is that, in most cases, your website or app needs only part of the JavaScript it loads right away. The rest is needed later down the road, or does work that doesn't influence the DOM. The obvious approach is to load only the critical scripts the normal way, and the rest in a manner that won't affect the app negatively. And indeed, there are two such loading methods.

The first uses the defer attribute, which marks a script that won't affect the DOM and is meant to be executed after the document has been parsed. In most cases, such scripts handle user interactions, making them safe to load this way. The second uses the async attribute, which marks a script that loads in parallel but executes as soon as it has been downloaded. There is no guarantee, though, that the loading order will match the execution order.
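The declarative form is a single attribute on the tag, e.g. `<script src="app.js" defer></script>`. Scripts can also be injected dynamically; the builder below takes the document as a parameter so the element-building logic can be exercised with a stub outside a browser:

```javascript
// Build a non-blocking <script> element.
function makeScript(doc, src, { async = false, defer = false } = {}) {
  const script = doc.createElement('script');
  script.src = src;
  script.async = async;
  script.defer = defer;
  return script;
}

// Browser-only: analytics can wait, so load it async
// (the file path is a placeholder).
if (typeof document !== 'undefined') {
  document.head.appendChild(
    makeScript(document, '/js/analytics.js', { async: true })
  );
}
```
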

With all the benefits these two attributes bring, they are becoming an important tool for improving the performance of websites and apps. Have a look at the resources below to learn more about how and when to use this technique:

Content Security Policy

From the beginning, security on the web was built around the “same origin” model, meaning that only scripts from the same domain can interact with a given page. Over time, though, we had to integrate third-party scripts into our pages: JavaScript libraries from CDNs, social media widgets from Facebook, Google+ or Twitter, and other similar cases. This means we opened the gates and allowed “guest” scripts to run in our metaphorical courtyard. The problem comes when malicious scripts slither inside as well and are executed just like the rest: an attack method we all know as Cross-Site Scripting, or XSS.

Content Security Policy is the main weapon in the fight against XSS. The mechanism comprises a set of policies and directives that specify which scripts are allowed to execute, where resources can be loaded from, whether inline styles or scripts can run, and so on. CSP is based on whitelisting: by default everything is denied, and only the specified resources can be accessed. This means that, with well-tuned rules, even if a malicious script is injected into our site, it will not be executed.
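Policies are plain header strings; buildCsp below is an illustrative helper for composing one, not a standard API:

```javascript
// Join a directives map into a Content-Security-Policy header value.
function buildCsp(directives) {
  return Object.entries(directives)
    .map(([name, sources]) => name + ' ' + sources.join(' '))
    .join('; ');
}

const policy = buildCsp({
  'default-src': ["'self'"],
  'script-src': ["'self'", 'https://cdn.example.com'],
  'img-src': ["'self'", 'data:'],
});

console.log(policy);
// default-src 'self'; script-src 'self' https://cdn.example.com; img-src 'self' data:
```

The server would send this value in a Content-Security-Policy header; anything not whitelisted is then refused by the browser.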

Here are some resources that will help you understand this mechanism better:

HTTP/2 protocol

From the very beginning, the web has run on top of the HTTP protocol. Yet, while the former has evolved tremendously, HTTP has remained mostly unchanged. In the complex ecosystem of modern websites and applications, HTTP can be a performance bottleneck. Sure, there are techniques and practices that can optimize the process, but there is only so much that can be done.

That's why a second iteration of the protocol, named HTTP/2, was developed, based on Google's SPDY protocol. It was approved in February 2015, and the spec was published as RFC 7540 in May 2015. So far the mainstream browsers support HTTP/2 only over encrypted connections, and it is quite likely to remain that way for the foreseeable future, to encourage site owners to switch to HTTPS.

Adopting HTTP/2 is not a simple matter of changing some configuration settings. Some of the best practices used to enhance performance over HTTP can actually hurt performance over HTTP/2. To find out whether your website is ready for HTTP/2, you can consult the resources below:

Resource hints: Prefetching

Web performance is all the rage nowadays, and for good reason: as everyone working in the field knows, a good deal of a page's loading time is spent downloading resources. Wouldn't it be nice if the time after a page has loaded could be used to preload resources for the next steps? That is exactly what resource hints are for.

Resource hints are a series of directives that tell the browser to make available, ahead of time, specific resources that will be needed later down the road. The list contains five hints, as follows:

  • dns-prefetch
  • preconnect
  • prefetch
  • preload
  • prerender

Of these five options, the only one with good browser support at the moment is prefetch. This hint tells the browser to cache documents that the user is most likely to request after the current page. That limits its use to resources that can be cached; using it with other types of resources will not work.
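The hint itself is just a link element. The helper below takes the document as a parameter so the wiring can be tested with a stub; the page URL is a placeholder:

```javascript
// Ask the browser to cache a likely next page.
function prefetch(doc, href) {
  const link = doc.createElement('link');
  link.rel = 'prefetch';
  link.href = href;
  doc.head.appendChild(link);
  return link;
}

// Browser-only usage:
if (typeof document !== 'undefined') {
  prefetch(document, '/articles/part-2.html'); // placeholder next page
}
```
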

If you are interested in this topic, here is a list of resources to provide more details:

Strict Transport Security

HTTPS is becoming the new standard for browsing, and more and more websites accept only secure connections. Plain connections (over HTTP) are usually redirected to the HTTPS version and things go on as usual. However, this approach is vulnerable to “man-in-the-middle” attacks, where the redirect instead lands you on a spoofed clone of the website you want (usually a banking site), built to steal your login credentials.

This is where the Strict Transport Security header comes into play. The first time you connect to the desired website over HTTPS, the header is sent to the browser. The next time you connect, even if you use the plain HTTP version of the URL, the browser goes directly to the HTTPS version, without going through the redirect cycle. Since no connection is ever made over HTTP, the attack described earlier can't happen.
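The header itself is one line; hstsHeader is an illustrative helper for composing its value:

```javascript
// Compose the Strict-Transport-Security header value.
function hstsHeader(maxAgeSeconds, includeSubDomains) {
  return (
    'max-age=' + maxAgeSeconds + (includeSubDomains ? '; includeSubDomains' : '')
  );
}

console.log('Strict-Transport-Security: ' + hstsHeader(31536000, true));
// Strict-Transport-Security: max-age=31536000; includeSubDomains
```
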

For more details on the Strict Transport Security header, check the following website:

Device Pixel Ratio

Window.devicePixelRatio is a read-only property that returns the ratio of the size of one physical pixel on the current display device to the size of one CSS pixel. It allows developers to detect high-density screens (like Retina displays on Apple devices or high-end Android screens). Used together with media queries and matchMedia (which we discussed above), this property allows the delivery of optimized resources for the best possible experience.
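A typical use is swapping in a sharper asset on dense screens; the file-naming scheme below is hypothetical:

```javascript
// Choose an asset suffix based on pixel density.
function assetSuffix(ratio) {
  return ratio >= 2 ? '@2x' : '';
}

// Browser-only usage (img.hero and the paths are placeholders):
if (typeof window !== 'undefined' && typeof document !== 'undefined') {
  const img = document.querySelector('img.hero');
  if (img) {
    img.src = '/img/hero' + assetSuffix(window.devicePixelRatio) + '.png';
  }
}
```
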

Web Video Text Tracks

Web Video Text Tracks (or WebVTT) is a format for marking up text captions for multimedia resources. It is used together with the HTML5 <track> element and enables subtitles, translations, captions or descriptions for a media resource (audio or video) in a synchronized way. The presence of this textual information makes the media resource a lot more accessible.
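A minimal .vtt file looks like this: a header line, then timed cues (the text here is invented for illustration):

```
WEBVTT

00:00:01.000 --> 00:00:04.000
Welcome to the third part of the series.

00:00:04.500 --> 00:00:08.000
Today we look at native JavaScript techniques.
```

It is then hooked up to the media element with a track tag inside the video element, along the lines of `<track src="captions.vtt" kind="subtitles" srclang="en" label="English" default>`.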

For instructions on how to get started with this functionality, check the following resources:

Wrapping Things Up

Here we are, at the end of this series of articles that started as a simple intellectual exercise: “The oldIE is gone! Let’s party! (…hours later…) Now what?”. We covered a broad range of topics, from the techniques and practices that were no longer needed to all the new stuff that we could now do freely without polyfills, be it HTML, CSS or native JavaScript. We even touched on wider topics like performance optimization and enhancing security.

Should you just jump in now and start refactoring all your code? Most probably not. Such a decision must be made by weighing the cost of refactoring against the cost of the technical debt. But if you are starting a new project, by all means build it for the future, not for the past.

Frequently Asked Questions (FAQs) about Native JavaScript Development after Internet Explorer

What is the significance of native JavaScript development after Internet Explorer?

Native JavaScript development after Internet Explorer is crucial because it allows developers to create more efficient and effective web applications. With the discontinuation of Internet Explorer, developers are no longer limited by the constraints and compatibility issues that were often associated with this browser. They can now utilize the full capabilities of JavaScript, including its latest features and updates, to build more dynamic, interactive, and user-friendly web applications.

How can I enable JavaScript in my browser?

Enabling JavaScript in your browser is a straightforward process. For most browsers, you can find the option to enable or disable JavaScript in the settings or preferences menu. Usually, this involves navigating to the ‘Security’ or ‘Privacy’ section and looking for an option related to JavaScript. Make sure to enable it for a better browsing experience.

Is there a difference between JavaScript in Internet Explorer and other browsers?

Yes, there is a significant difference between how JavaScript works in Internet Explorer and other browsers. Internet Explorer had a different JavaScript engine, which often led to compatibility issues and limitations. Modern browsers like Chrome, Firefox, and Safari use more advanced JavaScript engines that support the latest features and standards of JavaScript, leading to better performance and fewer compatibility issues.

What are the benefits of using native JavaScript over libraries or frameworks?

Using native JavaScript has several benefits over using libraries or frameworks. It allows for better performance, as there is no overhead of loading and parsing unnecessary code. It also provides more control over the code, as developers are not limited by the constraints of a specific library or framework. Additionally, understanding and using native JavaScript can lead to a deeper understanding of the language and its capabilities.

How has the discontinuation of Internet Explorer impacted JavaScript development?

The discontinuation of Internet Explorer has had a significant impact on JavaScript development. Developers are no longer required to write additional or different code to ensure compatibility with Internet Explorer. This has led to more efficient development processes and the ability to utilize the full capabilities of JavaScript. It has also resulted in more consistent user experiences across different browsers.

What are some of the latest features of JavaScript that I can use in my development?

JavaScript is constantly being updated with new features and improvements. Some of the latest features include async/await for handling asynchronous operations, spread syntax for expanding arrays or other iterable objects, and arrow functions for more concise function syntax. These features can greatly enhance your JavaScript development and allow you to write more efficient and readable code.

How can I ensure my JavaScript code is compatible with all browsers?

Ensuring browser compatibility is an important aspect of JavaScript development. One way to achieve this is by using feature detection, which involves checking if a specific feature is supported by the user’s browser before using it. Another method is to use polyfills, which are scripts that provide the functionality of newer features in older browsers that do not natively support them.

What is the future of JavaScript development after Internet Explorer?

The future of JavaScript development after Internet Explorer looks promising. With the discontinuation of Internet Explorer, developers can now focus on utilizing the full capabilities of JavaScript without worrying about compatibility issues. This, combined with the continuous updates and improvements to the language, suggests a future where JavaScript development is more efficient, powerful, and versatile.

What are some good resources for learning more about native JavaScript development?

There are many great resources available for learning more about native JavaScript development. Some popular online platforms include Mozilla Developer Network (MDN), freeCodeCamp, and Codecademy. These platforms offer comprehensive guides and tutorials on JavaScript, covering everything from the basics to more advanced topics.

How can I debug JavaScript code effectively?

Debugging is an essential part of JavaScript development. Most modern browsers come with built-in developer tools that can be used for debugging. These tools allow you to step through your code, inspect variables, and see any errors or exceptions that occur. Additionally, using good coding practices, such as writing clear and concise code and commenting your code, can also make the debugging process easier.

Adrian Sandu

Adrian is a UX Developer, creator, and speaker living in Iasi, Romania. He believes happiness is the true measure of success and he wants to help other developers achieve their dreams. In his off time, he loves playing video games and tinkering with custom PC builds.
