I wrote a blog post about how the AV industry is transforming into an information technology (IT) industry. In that article, I equated the old video feed-based AV industry to network circuit switching, while I compared IT-based video technology to packet switching, the network infrastructure typical today. In today's AV industry, nearly everything is done over networks rather than video cables, so the industry has made a partial transition to IT. Some in the industry, however, are still doing the same old things with networks that were done with video cables – they're switching video streams between input and output devices. That is a step towards becoming truly IT-centric, but it is not the endpoint. Sure, they're moving data, but they're not taking advantage of the ability to process that data the way a true distributed visualization system can.
Now what does it mean to “process the data?” Processing video data is very different from decoding a stream. Certainly, video streams must be decoded before they can be played, but in the IT-centric systems I'm talking about, the decoded stream is just data that can be processed and manipulated: it can be filtered, modified, rotated, positioned, and sized as needed. It means monitor boundaries are not content boundaries. Content can be scaled from many items showing on a single Direct-View LED mesh or projector to large content items covering all or parts of many displays in a video wall, as shown in the following pictures.
These pictures show content overlapping, crossing monitor boundaries, and sharing space with other content on a single monitor. This is possible because the content is just data that can be processed and displayed, whether it is an image, a video stream, a movie, or a data stream. Even compound content, like slideshows that contain other content items, can be manipulated, just like a simple video or a 2 billion-pixel image.
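To make the "monitor boundaries are not content boundaries" idea concrete, here is a minimal sketch of the pixel-canvas model described above. Content items live in one global coordinate space, and each physical display simply owns a rectangular window into that space; an item that straddles a seam just intersects two windows. The class and layout here are my own illustrative assumptions, not Hiperwall's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x: int
    y: int
    w: int
    h: int

    def intersect(self, other):
        """Return the overlapping region of two rectangles, or None."""
        x1 = max(self.x, other.x)
        y1 = max(self.y, other.y)
        x2 = min(self.x + self.w, other.x + other.w)
        y2 = min(self.y + self.h, other.y + other.h)
        if x2 <= x1 or y2 <= y1:
            return None
        return Rect(x1, y1, x2 - x1, y2 - y1)

# A 2x1 wall of 1920x1080 displays laid side by side on the canvas.
displays = {
    "display-0": Rect(0, 0, 1920, 1080),
    "display-1": Rect(1920, 0, 1920, 1080),
}

# One content item positioned so it straddles the seam between displays.
content = Rect(1200, 300, 1600, 600)

# Each display shows only the slice of the content inside its window.
for name, window in displays.items():
    print(name, window.intersect(content))
```

The key point is that nothing in the content's position refers to a display: moving or resizing the item just changes one rectangle on the canvas, and the per-display slices fall out of the intersection.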
Lots of video wall systems can show one content item per display or group of displays, but far fewer can show multiple items per display, especially if some of those items cross monitor boundaries. Showing multiple items is hard, even for solutions that claim to be IT-centric, because they tend to simply direct a video stream at each player computer driving a display. It is easy for Hiperwall, however, because we treat everything as data on a nearly endless pixel canvas that can show an enormous amount of content in detail. We can show hundreds of simultaneous sources across hundreds of displays of nearly any type and in nearly any configuration. We can do this because we are a true distributed system and don't use a centralized server to process all the feeds. We let the computers driving the displays do the work of decoding, processing, and drawing the content items they need, so there is no content bottleneck as there is in so many server-based systems.
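A toy illustration of why this distributed approach avoids a central bottleneck (the source names and layout are invented for the example, not taken from a real system): every display node sees the same global layout, keeps only the content items that overlap its own window, and decodes just those streams locally, so no server ever touches the pixels.

```python
def overlaps(a, b):
    """True if axis-aligned rects (x, y, w, h) intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

# Global layout shared with every node: source id -> canvas rectangle.
layout = {
    "camera-1": (100, 100, 800, 450),
    "camera-2": (2500, 200, 800, 450),
    "flight-map": (0, 0, 3840, 1080),   # spans the whole wall
}

def sources_for(window):
    """The subset of streams one node must decode for its window."""
    return sorted(s for s, rect in layout.items() if overlaps(rect, window))

# A 2x1 wall: each node independently computes its own workload.
print(sources_for((0, 0, 1920, 1080)))      # left display's node
print(sources_for((1920, 0, 1920, 1080)))   # right display's node
```

Each node decodes only the feeds it actually shows, so adding displays adds decode capacity instead of adding load on one server.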
One of our competitors recently put up a comparison chart trying to show how they compare to Hiperwall. Many of the claimed limits for Hiperwall are demonstrably and laughably untrue, but the feature claim I was most amused by is Picture-in-Picture. They claim we don't have it, while they, of course, do. Well, since you've read this article, you know we don't need to call out a Picture-in-Picture feature, because we can put any content wherever and whenever we want, so we can trivially put one content item in front of another, and we can add transparency, so it is even better than traditional Picture-in-Picture. But just to prove the point when they read this, here is a video of a live video stream moving in front of another video stream in front of a huge live data feed of air traffic in the Los Angeles area. The moving video stream even becomes somewhat transparent as it moves in front of the other video stream. Sure, some of our customers don't need to do all those things at once, but if they need to, they can.
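The "better than Picture-in-Picture" behavior boils down to ordinary compositing: draw the content items back to front, blending each one over the result with its own opacity. A single-pixel sketch in pure Python (RGB values 0–255; the particular layer list is invented to mirror the demo described above):

```python
def blend(dst, src, alpha):
    """Alpha-blend one RGB pixel over another (alpha in [0, 1])."""
    return tuple(round(alpha * s + (1 - alpha) * d) for d, s in zip(dst, src))

# Back-to-front layers at one canvas pixel: the air-traffic feed, an opaque
# video stream, and a moving stream drawn at 50% opacity on top.
layers = [
    ((10, 20, 40), 1.0),    # air-traffic map (opaque background)
    ((200, 50, 50), 1.0),   # first video stream (opaque)
    ((50, 200, 50), 0.5),   # moving stream, half transparent
]

pixel = (0, 0, 0)
for color, alpha in layers:
    pixel = blend(pixel, color, alpha)

print(pixel)
```

Because layering and opacity apply to every content item on the canvas, "Picture-in-Picture" is not a special feature; it is just two items whose rectangles happen to overlap.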