Fair point, we could answer that more directly on the site. Besides the comparison, were there other things that made it seem oriented to people already familiar with it?
Generally, the video tag is great and has come a very long way from when Video.js was first created. If the way you think about video is basically an image with a play button, then the video tag works well. If at some point you need Video.js, it'll become obvious pretty quickly. Notable differences include:
* Consistent, stylable controls across browsers (browsers each change their native controls over time)
* Advanced features like analytics, ABR, ads, DRM, 360 video (not all of those are in the new version yet)
* Configurable features (with browser UIs you mostly get what you get)
* A common API to many streaming formats (mp4/mp3, HLS, DASH) and services (YouTube, Vimeo, Wistia)
Of course many of those things are doable with the video tag itself, because (aside from the iframe players) video.js uses the video tag under the hood. But to add those features you're going to end up building something like video.js.
Part of what makes AI useful to me is getting through the layers of "what the hell is this, exactly" that slow you down when you jump more than one level beyond your domain knowledge. I think every knowledge container (document, website, what have you) should have a "what the hell is this" link / rich tooltip / accordion section / whatever by default.
Of course, AI explanations often also fail at this unless you give them "ELI5" or other relevant prompting (I'm looking at you Perplexity).
I’m not sure which user we’re talking about, but it’s up to the video.js user to decide if and when they use ads. Just like it’s up to YouTube. Video can get expensive, so some video wouldn’t exist without some form of monetization.
In this case, you're talking about the browser user, and not the dev user of video.js, but I feel like you know this and are just trying to rail against ads in a manner that's just not relevant.
If someone providing video content wants to run ads as part of making the video available to you, that's up to them. It's also up to you if you want to attempt to view the video without those ads or skip watching altogether. But to the dev of video.js, your personal choices about consuming AVOD content are irrelevant.
It just doesn't work in every environment; every browser version has its own issues and edge cases. If you need a stable video player or want streaming features, you should use it.
P.S. I built a movie streaming and TV broadcasting player for the country of Georgia and supported environments from 2009 LG smart TVs to modern browsers.
You think it's solid until you want customization and old-browser support. It should work fine if you just want to autoplay a small muted mp4 file.
In case anyone's wondering, this website's syntax highlighting color scheme is called "gruvbox", which I quite like but took an embarrassingly long time to track down
Probably not the base case, but a quick test to replace my audio player (currently using Plyr) turned up the following gaps for me, at least with the out-of-the-box code.
1. No playback rates under 1
2. No volume rocker on mobile
3. Would appreciate having seek buttons on mobile too
4. No (easily apparent) way to add an accent color, stuck with boring monochrome
5. Docs lacked clear example/demo/playground so I wasn't sure what it would look like until implemented
All solid feedback, thanks! I'm making sure these get captured as issues. Otherwise we're closely tracking feature parity with Plyr (and other players) and our goal is to have full parity by GA, aiming for the middle of the year.
- On Mac with Increase Contrast turned on in accessibility settings the control bar ends up being white-on-light-grey
- When focusing the volume control with a keyboard, you can only mute or un-mute, not use up or down to adjust the volume. To do that you have to tab again into the volume slider field
- Don’t seem to be able to enter picture-in-picture mode with the keyboard
- Purely from a first class citizen point of view, it’d be nice to have all the accessibility options (transcripts, etc) shown in the homepage demo
I'm not familiar with video hosting but have played with the HTML5 video player, and I have this question: on the server side, do I have to host a specific endpoint that serves chunks of video? Let's say I take a 720p video @ 800MB and chunk it into 2MB pieces with ffmpeg. So I have a folder somewhere (webserver, CDN, blob storage) with the original 4K video, then generate downscaled versions for 1440p, 1080p, and 720p, so I end up with 4 large files, and then for each of those I chunk them into reasonable sizes that align with bitrates / keyframes. And then some thumbnail generation. Any advice on the "best" way to chunk/host video files so that videojs runs the best and smoothest? I feel that I should build a very lean/fast chunk & thumbnail server, just one or two endpoints. Or is it best to let the webserver do the lifting? Or off-the-shelf media servers (like in the self-hosting community)?
Just convert it to HLS, which is naturally chunked at 1-2 second intervals, and serve all the pieces from nginx. No dynamic content needed. I do this with videojs and it works great. An added bonus of HLS is that my LG TV supports it natively from <video> tags.
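For anyone wanting to try this, a single ffmpeg invocation along these lines produces the playlist plus segments you can drop on any static file server (filenames and encoder settings here are just placeholders, not my actual config):

```shell
# Transcode one MP4 into HLS: a .m3u8 playlist plus MPEG-TS segments.
# -hls_time sets the target segment duration in seconds;
# -hls_playlist_type vod writes the full playlist instead of a sliding window.
ffmpeg -i input.mp4 \
  -c:v libx264 -preset veryfast -crf 22 \
  -c:a aac -b:a 128k \
  -hls_time 4 -hls_playlist_type vod \
  -hls_segment_filename 'seg_%04d.ts' \
  out.m3u8
```

Then point the player at `out.m3u8` and let nginx serve everything as plain files.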
If you don't need to switch versions at runtime (ABR), you don't even need to chunk it manually. Your server has to support range requests and then the browser does the reasonable thing automatically.
The simplest option is to use some basic object storage service and it'll usually work well out of the box (I use DO Spaces with built-in CDN, that's basically it).
Yes, serving an MP4 file directly into a <video> tag is the simplest possible thing you can do that works. With one important caveat: you need to move the "MOOV" metadata to the front of the file. There are various utilities for doing that.
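With ffmpeg, for example, that's a quick remux rather than a re-encode (paths are placeholders):

```shell
# Move the MOOV atom to the front of the file so playback can start
# before the whole file has downloaded; -c copy remuxes without re-encoding.
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4
```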
It's not quite as simple as that because the chunks should be self-contained; they need to start with an IDR keyframe, which fully resets the decoder. That allows the player to seek to the start of any chunk.
That means when you're encoding the downscaled variants, the encoder needs to know where the segment boundaries fall so it can insert those IDR frames there. Therefore it's common to do the encoding and segmentation in a single step (e.g. with ffmpeg's "dash" muxer).
You can have variable-duration or fixed-duration segments. Supposedly some decoders are happier with fixed-duration segments, but it can be fiddly to get the ffmpeg settings just right, especially if you want the audio and video to have exactly the same segment size (here's a useful little calculator for that: https://anton.lindstrom.io/gop-size-calculator/)
For hosting, a typical setup would be to start with a single high-quality video file, have an encoder/segmenter pipeline that generates a bunch of video and audio chunks and DASH (.mpd) and/or HLS (.m3u8) manifests, and put all the chunks and manifests on S3 or similar. As long as all the internal links are relative they can be placed anywhere. The video player will start with the top-level manifest URL and locate everything else it needs from there.
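As a rough sketch of that encode-plus-segment step (flag names vary a bit across ffmpeg versions, and the bitrates/resolutions here are just placeholders):

```shell
# Encode two quality variants and segment them in one pass with the "dash"
# muxer; this writes manifest.mpd plus init/media chunks next to it.
# The keyframe interval (-g) is pinned and scene-cut keyframes disabled
# (-sc_threshold 0) so segment boundaries land on IDR frames.
ffmpeg -i source.mp4 \
  -map 0:v -map 0:v -map 0:a \
  -c:v libx264 -g 96 -keyint_min 96 -sc_threshold 0 \
  -b:v:0 4500k -s:v:0 1920x1080 \
  -b:v:1 1500k -s:v:1 1280x720 \
  -c:a aac -b:a 128k \
  -seg_duration 4 \
  -adaptation_sets "id=0,streams=v id=1,streams=a" \
  -f dash manifest.mpd
```

Upload the whole output directory to S3 (or similar) and hand the player the manifest URL.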
Why does the bundle size matter when playing 3MB+ videos anyway? Curious how I could integrate one of these players without polluting my bundle with duplicates :)
Just want to say, thanks for the comprehensive blog post and not treating the reader like children. You did a great job explaining the differences & changes. I wish more product/project releases were done this well.
Ah...you're scratching at some scabs with this totally reasonable question.
We learned some tough lessons with media-chrome[1] and Mux Player, where we tried to just write web components. The React side of things was a bit of a thorn, so we created React shims that provided a more idiomatic React experience and rendered the web components...which was mostly fine, but created a new set of issues. The reason we chose web components was to not have to write framework-specific code, and then we found ourselves doing both anyway.
With VJS 10 I think we've landed on a pretty reasonable middle ground. The core library is "headless," and the rendering layer sits on top of it. The benefit is true React components and nice web components.
Web components sound neat until you try to make styling and SSR behave across a mess of app setups, and then you're burning time on shadow DOM quirks, hydration bugs, and framework glue instead of the player itself. Most users do not care. A plain JS lib with a decent API is easier to drop into an old stack, easier to debug, and less likely to turn us into free support for someone's ancient admin panel.
Is it not a web component, per se? Per the article, all the React stuff does seem to bake down to HTML Custom Elements, that get wired up by some client-side JS registering for them. That client-side JS is still a "web component", even if it's embedded inside React SPA code bundle, no?
If you mean "why do I need React / any kind of bundling; why can't I just include the minified video.js library as a script tag / ES6 module import?" — I'm guessing you can, but nobody should really want to, since half the point here is that the player JS that registers to back the custom elements is now way smaller, because it's tree-shaken down to just the JS required to back the particular combination of custom elements you happen to use on your site. And doing that requires that, at "compile time," the tree-shaking logic can understand the references from your views into the components of the player library. That's currently possible when your view is React components, but not yet possible (AFAIK) when your view is ordinary HTML containing HTML Custom Elements.
I guess you could say, if you want to think of it this way, that your buildscript / asset pipeline here ends up acting as a web-component factory to generate the final custom-tailored web-component for your website?
Congrats Steve! I haven't touched video since I was at JW Player a million years ago, but I was always inspired by the simplicity of video.js (especially the theming).
Hope this new iteration is exceptionally successful.
Oh hi Zach! Blast from the past. Hope you’re doing well and thanks for the well wishes. Always enjoyed chatting with you and the JW team at FOMS and conferences. The water’s warm back here in video tech if you ever want to jump back in!
So fun seeing all these familiar names pop up in a single thread, haven't been active in video after leaving Kaltura but have fond memories of FOMS/FOSDEM and meeting all of you!
Sibling comment didn't elaborate, but I think they might be onto something.
It happened to me personally - LLMs and agentic coding tools enabled me to pick up old side projects and actually finish them. Some of these projects were in the drawer for years, and when Sonnet 4 released I gave them another try and got up to speed really quickly. I suspect this happened to many developers.
Something AI has done for Video.js is allow us to set our sights higher with about the same-size team. Specifically, aiming for idiomatic components and patterns for each popular JS framework (React, Svelte, Vue, React Native), not just web component wrappers (though I still love web components on their own).
Absolutely the case for me. Small fun projects that would take a few hours to round off a feature can now be done in an hour. Why wouldn't I finish it off?
I see you promoting that the look is consistent across browsers. I've seen several other video players that are browser dependent because of particular JS features used. Are future features going to remain browser agnostic?
Can you give an example of a player/feature combo where this is the case? For general player features there's not really an excuse for only working in one browser, but features like Casting can be browser dependent because the browser has to expose that functionality. Other interesting prototypes rely on a new API called Web Codecs that isn't fully supported everywhere.
In the core JS of Video.js v10 we're building without the assumption of there even being a browser involved so we can point to future JS-based platforms like React Native.
There have been other video players that attempt to be more "professional," adding things like audio meters and other tools. These use parts of JS that aren't available anywhere except Chrome. After digging in, there's a lot of audio-related functionality that Chrome is trying to do that FF/Safari are not. We have mp4 files with multiple audio streams that Chrome exposes the ability to select from while FF/Safari do not. Creating waveforms on the fly from a video source also becomes problematic.
We've also had issues getting frame accuracy when navigating the video stream. There's some sort of "security" measure that randomizes/rounds the returned value of currentTime (likely the reduced timer precision browsers use to mitigate timing attacks), though I can't wrap my head around how that's security related. Lots of effort spent on getting the stock HTML5 video element to be frame accurate.
In the works! You should be able to use the existing <youtube-video-element> [1] with the HTML side of v10 today, but we're working on porting over the other media elements into the new architecture for better React support.
I would also be interested in this. Subtitle presentation is something browsers are still generally very bad at out of the box, so having good subtitle rendering support built directly into the library would make a lot of sense to me. As someone with a lot of knowledge on this subject, I would be very much willing to help at least draft design documents for something like this, if not more.
Hey there, core contributor here! This came up during our beta effort. We very likely will be having an opt-in, non-native subtitles rendering implementation. I know at least a few team members that really want it, which adds to the likelihood that we'll add it eventually. The short version of why we started with native subtitles - bundle size and legal compliance, with a dash of prioritization and a sprinkle of hope that some looming laws will motivate browser owners to prioritize improvements. If you want to see our design decision artifact on the topic, we try to make a lot of them public (also to help the robots these days) - https://github.com/videojs/v10/blob/main/internal/decisions/...
Genuinely didn't expect 88% — what was the biggest win? Guessing it was the plugin system since that thing was a mess. Also curious if you broke any of the major integrations during the rewrite or managed to keep them intact.
Hey there, core contributor here! Starting with the last one first, since that's the easiest - VJSv10 is basically a completely new player, so no backwards compatibility planned (think Mac <=OS9 vs. OSX). We're aiming to port some of the popular plugins though and have discussed other things like migration guides and the like.
For the primary question - this is a tough one, specifically because v10 is a completely new, ground up architecture. Part of this will be feature parity - v8 does many things/handles many cases that v10 doesn't do yet. That may seem like an unfair comparison, and, in some sense, that's true. However, this is in fact part of the ethos of our new architecture: by building a highly composable, loosely coupled player framework with well defined architectural "seams"/contracts, you can more easily pull in "all and only what you need for your use case" (a phrase I've been bandying about). While v8 allows for some of this, it's still much harder and you still end up pulling in stuff you probably don't need for a lot of use cases.
Another one is the UI layer - v8 ended up building an entire component implementation. At the time of building, it kind of had to. v10, on the other hand, can "stand on the shoulders of giants", building on top of e.g. custom elements, or React, or any future frameworks we decide to target (and our architecture makes that comparatively easy as well).
I do suspect that once we hit true feature parity, the numbers will be much closer for "the kitchen sink." The thing is, few people (if any) need the kitchen sink.
I was just lamenting the other day about the size of video.js, which is used in my legacy web app, and looking for a way to improve that. Very keen to explore how we could migrate to v10!
Absolutely! The community has always been the strongest part of the project.
In the new version the core player itself is built as many composable components rather than one monolithic player, so we're going to invite more people to contribute their "plugins" to the core repo as more of those composable components. Versioning plugins and keeping them up to date has always been a challenge, so we're thinking this will help keep the whole ecosystem working well together.
This is amazing. We also created a kind of Player context provider and were using it to maintain/mutate player state globally. If it's possible to also share any examples related to player events and the new way to register plugins in v10, that would also help me better understand the overall picture.
Hey there, I'm on the Video.js team! Sounds like your context provider approach is already in the right ballpark!
Some background: our store[1], which was inspired by Zustand[2], is created and passed down via context too. This is the central state management piece of our library and where we imagine most devs will build on for extending and customizing to their needs.
Updates are handled via simple store actions like `store.play()`, `store.setVolume(10)`, etc. Those actions are generally called in response to DOM events.
On the events side of things, rather than registering event listeners directly, in v10 you'd subscribe to the store instead. Something like `store.subscribe(callback)`, or in React you'd use our `usePlayer`[3] hook. The store is the single source of truth, so rather than listening to the underlying media element directly, you're observing state changes.
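To make that concrete, here's a rough standalone sketch of the pattern (the names and shapes here are illustrative only, not our actual v10 API):

```javascript
// Minimal observable-store sketch (illustrative, not the real v10 store).
// State lives in one place; actions mutate it and notify subscribers.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial };
      listeners.forEach((fn) => fn(state));
    },
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn); // returns an unsubscribe function
    },
  };
}

// Hypothetical player slice: actions wrap setState, UI code subscribes.
const store = createStore({ paused: true, volume: 1 });
store.play = () => store.setState({ paused: false });
store.setVolume = (v) => store.setState({ volume: v });

store.subscribe((s) => console.log("paused:", s.paused, "volume:", s.volume));
store.play();         // → paused: false volume: 1
store.setVolume(0.5); // → paused: false volume: 0.5
```

The key idea is that UI never listens to the media element directly; it only observes the store.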
---
So far with v10 we haven't been thinking about "plugins" in the traditional sense either. If I had to guess at what it would look like, it'd be three things:
1. Custom store slices[4] so plugins can extend the store with their own state and actions
2. A middleware layer that plugs into the store's action pipeline so a plugin could intercept or react to actions before or after they're applied, similar to Zustand middleware, or even in some ways like Video.js v8 middleware[5]
3. UI components that plugins can ship which use our core primitives for accessing the store, subscribing to state, etc.
I believe that'd cover the vast majority of what plugins needed in v8. We haven't nailed down the exact API yet but that's the direction we're leaning towards. We're still actively working on both the library and our docs so I don't have somewhere I can link to for these just yet (sadly)! We're likely targeting sooner, but GA (end of June) is the deadline.
I should also add... one thing we prototyped early on that may return: tracking end-to-end requests through the store. A DOM event triggers a store action like play, which calls `video.play()`, which then waits for the media event response (play, error, etc.). It worked really well and lines up nicely with the middleware direction.
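A standalone sketch of that request-tracking idea (again, illustrative names and a stand-in media object, not the real implementation):

```javascript
// Sketch of tracking a "play request" end to end: a store action records
// the request, calls the media API, and settles when the media responds.
// (Illustrative only -- not the actual v10 API.)
function createRequestTracker(media) {
  const pending = [];
  return {
    pending,
    async play() {
      const request = { type: "play", startedAt: Date.now() };
      pending.push(request);
      try {
        await media.play(); // resolves/rejects like HTMLMediaElement.play()
        request.result = "played";
      } catch (err) {
        request.result = "error";
      }
      return request;
    },
  };
}

// Usage with a stand-in for the real <video> element:
const fakeMedia = { play: () => Promise.resolve() };
const tracker = createRequestTracker(fakeMedia);
tracker.play().then((req) => console.log(req.type, req.result)); // play played
```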
Looking great. I'll give it a try later on once things stabilize a bit.
In the meantime, does anyone know what's going on in this space? Seems to me like a lot has changed over the past year. E.g., React Player has a new version and was taken over by Mux. I also realized Video.js is sponsored by Mux, and seemingly different companies are working together.
OP and Mux co-founder here, so I have all the context on this. A lot has changed. Mux stepped in to help maintain React Player a few years ago. It wasn't getting frequent updates and Mux has a vested interest in the whole OSS player ecosystem (even if we didn't build it) because Mux Video (hosting) is player agnostic, and we get support requests for all of them. @luwes from Mux did the work to get to the new version, while making it possible to use Media Chrome media elements with React Player and consolidating some development efforts. We're still a tiny player team so that was important.
There are no immediate plans to deprecate React Player and I think it holds a special place in the ecosystem, but there will be overlap with video.js v10 and if there's specific features you care about or feel are missing, or if you think we're doing a bad job, please voice it here.
It was a similar story with Vidstack and Plyr, with Mux first sponsoring the projects. That's how I met Rahim and Sam, and how we got talking about a shared vision for the future of players.
Thank you! I’m on the Video.js team, and we’d love for you to try the library out and share your feedback. We’re especially eager to hear from developers who used or tried v8 in the past.
We’re taking a new approach to the library with a lot of new concepts, so your feedback would help us a ton during Beta as we figure out what’s working well and what isn’t.
The biggest architectural move at multiple layers of the stack was moving from monolithic controller objects to composable, tree-shakeable components, functions and state slices. Fewer trade-offs and more taking advantage of modern JS bundlers.
I am curious: why would anyone pick HLS over DASH these days?
Granted, my knowledge on the matter is rather limited, but I had some long-running streams (weeks), and with HLS the playlist became quite large while with DASH the MPD stayed as small as it gets.
Core VJS contributor here and builder of players and playback engines for too long (aka before HLS and MPEG-DASH were a thing). As others mentioned, the support matrix for HLS is very typically the proximate, pragmatic reason why folks will reach for HLS over DASH in a "pick one" situation. You're right that HLS is particularly bad for 24/7, long lived DVR/"EVENT" (to use HLS jargon) streams (fine for live, and there are some "cheats" you can do for EVENT to help there) compared to MPEG-DASH's <SegmentTemplate>-based "dynamic" MPEG usage.
Outside of that, though, the standards themselves have different pain points and tradeoffs. Some things are "cleaner"/"less presumptuous" in DASH, but DASH also has a lot of design details that were both "design by committee" (aka different competing interests resulting in a spec that has arguably too many ways to do the same thing) and overrepresented by server-side folks (so many playback complexities/concerns/considerations weren't thought through). It is also sometimes not constrained enough, at least by default (things like not guaranteeing time alignment across media representations). For what it's worth, I think there are lots of pain points from the HLS design decisions as well, but focusing on DASH here just given the framing of your question.
This is true, and the whole iOS/iPadOS/tvOS ecosystem supports HLS natively making it much easier to work with on that platform. In addition, Chrome recently added support for HLS[1] (and not DASH), so the native browser support for HLS is getting pretty wide.
HLS also has newer features that address the growing manifest issues you were seeing. [2]
All that said, I think a lot of people would feel more comfortable if the industry's adaptive streaming standard wasn't completely controlled by Apple.
Full rewrite and an intentional architecture to allow for composability and tree shaking, meaning the player bundle only ever includes the features you're using.
Did the private equity buy the domain videojs.org (did it take control of the project and you somehow regained control after selling) or was this domain (and the project) always under your control?
I'm on the Video.js team, just wanted to say thank you! Means a lot and we'd be eager to hear your experience trying it out. Feel free to drop a GitHub issue or discussion post if you ever get a chance :)
From me, this is a massive relief after we just deployed a bunch of videos to Vimeo. The next week they were bought.
I'm a one-man operation. In the order of hundreds of videos served a week. All I want is control over my own destiny. If this and a VPS can do that, that'll be amazing. Thank you for doing this.
As someone who uses VideoJS on a website with a large video library, and has generally been dismayed at the state of the plugin ecosystem every time I consider doing a major version upgrade of VideoJS, this kind of thing is great to hear.
It’s largely because (1) the React runtime is not bundled, so it’s technically not apples to apples, and (2) the Web Component includes CSS as well since we’re using Shadow DOM.
Basically a few kB for CSS and a few kB for a thin “framework” layer for managing attr-to-prop mapping, simple lifecycle, context, and so on.
We are designing with the goal of supporting more frameworks like Svelte and Vue specifically, even as far as React Native! We just don’t know when exactly yet but a large part of our approach in v10 is to make sure we can deliver the best possible experience to each frontend framework. It’s important for us that the integrations don’t feel like wrappers but truly idiomatic.
In the meantime, we’re hoping our custom elements will act as a good stopgap. Most frameworks including Svelte support them well, and we’re pouring love into the APIs so they feel good to use regardless of which framework.
If you’re interested in peeking under the hood, architecturally we’re taking a similar approach to TanStack and separating out a shared core from the beginning, but with one added step of splitting out the DOM as well to aid in supporting RN one day.
Can anyone recommend a good, battle-tested "slider" solution for playing videos as well as displaying images from a single gallery? Ideally capable of handling huge galleries (hundreds of items) with lazy loading.
Not a today answer, but this is something I'm excited to build within the new Presets concept of video.js v10, where we can build specific "video interfaces" beyond a standard player using the composable architecture.
We currently already use video.js, and our framework is used all over the place, so we’d be the perfect use case for you guys.
How would we use video.js 10 instead, and for what? We would like to load a small video player, for videos, but which ones? Only mp4 files, or can we somehow stream chunks via HTTP without setting up ridiculous streaming servers like Wowza or Red5 in 2026?
That's great! It looks like you have a pretty extensive integration with the prior version of Video.js, so migrating will take some work, but I think worth it when you can make the time. That said, for Beta it works with browser-supported formats and HLS, with support for services like YouTube and Vimeo close behind as we migrate what we have in the Media Chrome ecosystem[1]. So if that's what you need, maybe hold your breath for a few weeks.
What are you supporting today that requires Wowza or Red5? The short answer is Video.js is only the front-end so it won't help the server side of live streaming much. I'm of course happy to recommend services that make that part easier though.
Thank you for your feedback. Yep, I definitely understand that Video.js is just the front end. I want to avoid using Wowza / Red5 and just serve chunks of video files, essentially buffering them and appending them to the "end of the stream": laying down tracks ahead of the video.js train riding over those tracks.
So I'm just wondering whether we can do streaming that way, and video.js can "just work" to play the video as we fetch chunks ahead of it ("buffering" without streaming servers, just basic HTTP range requests or similar).
You should check out HLS and DASH. If you're already familiar and you're not using them because they don't meet your requirements, then apologies for the foolish recommendation. If not, this could solve your problem.
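If HLS does fit, the front-end side can stay very small. As an illustration using the current stable Video.js (the CDN URL and version here are just an example), the whole setup is roughly:

```html
<!-- Sketch of a classic Video.js setup playing an HLS manifest.
     URLs, version, and the manifest path are placeholders. -->
<link href="https://vjs.zencdn.net/8.10.0/video-js.css" rel="stylesheet" />
<script src="https://vjs.zencdn.net/8.10.0/video.min.js"></script>

<video id="player" class="video-js" controls preload="auto" width="640" height="360">
  <source src="/streams/my-video/out.m3u8" type="application/x-mpegURL" />
</video>
<script>
  // Video.js 8 bundles VHS (videojs-http-streaming), so HLS also plays
  // in browsers without native HLS support.
  var player = videojs('player');
</script>
```

Your server then only needs to serve the manifest and segment files as static content with range-request support.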
Hey VJS core contributor here. We definitely feel that concern and we also don't yet have a silver bullet formalized. I suspect we'll need some kind of alternate implementations or feature augmentation at some point. We're currently doing things in a bit more ad hoc way, such as the interrelationship between PiP and Fullscreen (see, e.g.: https://github.com/videojs/v10/blob/main/packages/core/src/d...).
One other thing to note: because the features are "composed", we at least have a lot of flexibility here that makes me feel pretty good about the fundamentals and not "coding ourselves into a corner" here.
Yeah, the composability buys you a lot of room. One central store with events, inject it into each feature, and they stay decoupled without painting yourself into a corner.