
1 min read

Why Companies Need an Executive Video Wall Command and Control Center

By Jonathan Gieg on Aug 21, 2017 4:49:00 AM

In the digital age, managing the instantaneous flow of information across a company is essential to mitigating public relations disasters and controlling crisis situations. An executive command and control room with multiple information feeds, including real-time social media and broadcast news, can be used to contain and reduce the impact of reputation-damaging events and emergencies. Without a platform to gather and review these streams of information, or to collaborate with officials onsite, a company's ability to make risk-mitigating decisions is hindered.
 
A command and control room gives executives a significant advantage during large events and emergencies. It allows them to assess numerous sources in real time: video feeds, news stations, social media activity, situational information, employee FaceTime feeds, security feeds, and shared airport command center content, alongside standard and customized applications used to organize the information and collaborate before making a decision. Monitoring social media activity on an LED video wall is especially valuable for gathering intelligence from tweets and other communications and for dispelling incorrect information as quickly as possible. Used correctly, a video wall provides a complete visualization of the magnitude of a situation and can significantly alter how a company chooses to respond.
 
Outside of emergencies, the video wall can be used for business proposals, training, and debriefings. Proposal and training materials such as videos, online resources, and PowerPoint presentations can be elegantly displayed on the video wall system, and multiple feeds can expedite and improve the debriefing process.
 
Every company stands to benefit from a centralized executive command and control center for monitoring and controlling its security and information flow. It is an integral asset that improves visual situational awareness and resolves inefficiencies at many levels of a company. In emergencies it is unmatched at delivering the most up-to-date information, enabling strategic decision making and seamless collaboration between executives, upper management, operators, and first responders.

 

4 min read

AV vs IT: Disruption or Evolution? | Hiperwall Video Wall Software

By Dr. Stephen Jenks Co-Founder / Chief Scientist on Aug 4, 2017 4:44:00 AM

Comparing traditional AV approaches with more IT-style approaches is important for understanding the evolution of the content distribution technologies used for video walls and digital signage. A few of the examples below draw on Hiperwall technology for illustration, though the ideas are general enough to apply to other IT-based systems and technologies.

The purpose of the technology behind an LED video wall or a digital signage system is to get content (typically video and images) to the monitors or projectors that make up the system. That technology may be simple or complicated, depending on how many inputs and outputs are needed and how the content needs to be presented.

The industry has been moving away from traditional audio/video (AV) technologies and toward information technology (IT). Recent editorials in Commercial Integrator (http://www.commercialintegrator.com/av/adam-forziati-av-industry-impressions/ followed by http://www.commercialintegrator.com/av/defense-adam-initial-impressions/) show this transition is well under way. The technology has become digital, and though the industry still needs AV know-how (acoustics, sightlines, user experience), it also needs IT capabilities: networking, security, scripting, and an understanding of resource usage.

Since the goal of content distribution for video walls and digital signage is to get content to the displays, I'm going to use a network analogy to explore some of the technologies. Some of us may remember the great debate between circuit-switched and packet-switched networking. The phone companies, who were in the business of connecting calls (and thus switching circuits), were proponents of circuit switching. In circuit switching, a dedicated connection is established from the source to the destination before information can be sent. This dedicated connection offers reliable bandwidth but very poor resource utilization (much of the bandwidth may be unused at any given time).

The other approach, packet switching, shares bandwidth by routing chunks of information (packets) from their source to their destination along a data path that can be shared by many other packets. This approach allows for much better resource utilization and has been nearly universally adopted. As a side note, the telephone companies tried to compromise by proposing Asynchronous Transfer Mode (ATM), which was packet-based but provided virtual circuit switching at the time a connection was established. It is used for ISDN and deep within some backbone networks.

How does this networking analogy apply to visual information distribution? The traditional matrix switch that switches an input to one or more output monitors is like a circuit-switched network: the path is dedicated from input to output. If that functionality is all that is needed, then such a non-shared approach is appropriate. We can argue that traditional server-based video wall systems are also like circuit-switched networks. They may not appear so at first glance, because the inputs can be shown on any monitor, or perhaps several monitors, so the connection from source to destination (monitor) is not obvious. However, we need to think of the hardware server as the destination in this case. It is the component that renders the video to the displays, and thus it is the constrained resource. While the server is a powerful machine, it has to accept all of the inputs and drive all of the displays, so it can be overwhelmed if it has to do too many things at once.

Distributed content delivery systems lean toward the packet-switched network analogy. The network connecting all the sources and the many computers that drive the displays is a shared resource designed to support the delivery of many feeds to many destinations simultaneously. In addition, the amount of resources (computers, memory, network switch bandwidth) scales as the number of displays grows. It is entirely possible to overwhelm individual resources, but that doesn't break the entire system. For example, one display computer could be overwhelmed by receiving and trying to decode more video feeds than its CPU can handle, but in a well-designed distributed system only that display is affected while the rest of the system operates normally. As with a packet-switched network, even if one node gets bogged down, packet traffic still flows everywhere else.

For example, the Hiperwall software treats all the connected displays as a huge canvas. Any of the content can be shown anywhere, from a small portion of one display to partially covering several displays to the entire video wall or signage system. All the content is digital and shared as needed on the network (capture cards can digitize video sources, for example), and thus can be distributed to any or all the computers driving the displays. Of course, putting too many 4K videos or high-bandwidth streams on a single display computer may cause that computer to stutter while playing the content, but the rest of the system is unaffected by that overload condition. Because of the distributed, shared nature of the system, it can display a huge amount of content, including multiple streamed or stored content items per display, while scaling to hundreds of displays showing live streams.
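
To make the shared-canvas idea concrete, here is a minimal sketch (in Python, not Hiperwall's actual code) of how a content item placed on one large virtual canvas maps onto individual display computers: each node computes the overlap between its own tile and the content rectangle and renders only that slice. The wall layout, tile sizes, and class names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def intersect(self, other: "Rect") -> Optional["Rect"]:
        # Return the overlapping rectangle, or None if the two do not overlap.
        x1, y1 = max(self.x, other.x), max(self.y, other.y)
        x2 = min(self.x + self.w, other.x + other.w)
        y2 = min(self.y + self.h, other.y + other.h)
        if x2 <= x1 or y2 <= y1:
            return None
        return Rect(x1, y1, x2 - x1, y2 - y1)

# A 2x2 wall of 1920x1080 tiles forms one 3840x2160 canvas.
tiles = {(col, row): Rect(col * 1920, row * 1080, 1920, 1080)
         for col in range(2) for row in range(2)}

# One content item placed so that it straddles several displays.
content = Rect(960, 540, 1920, 1080)

# Each display node works out which slice, if any, it must render.
for pos, tile in tiles.items():
    overlap = tile.intersect(content)
    if overlap:
        print(f"display {pos} renders canvas region {overlap}")
```

Because that calculation happens independently on every node, adding displays adds rendering capacity instead of loading a single central server.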

There are many products in the industry that take advantage of IT (IP networking, modern processors, etc.) but really provide point-to-point solutions (digitize video here, display it there). These are like Asynchronous Transfer Mode, in that they provide packet-based virtual circuit switching. There is nothing wrong with these products if you need the equivalent of a really long HDMI cable, but they don't scale and aren't as flexible as systems that provide a shared canvas on which to draw many content items.

In short, the content display industry, including video walls and digital signage, has spent the past decade transitioning from traditional dedicated AV hardware (circuit switching) to commodity IT infrastructure built on IP networking, which provides resource sharing, flexibility, and fewer constraints. This transition is lowering costs, enhancing capabilities, and enabling unprecedented scalability, which matters in a world where displays are all around us. As displays become ubiquitous, the hardware, software, and networking technology that drives their content must scale appropriately.

 

3 min read

Hiperwall Version 4.6: Streaming at its Best | Hiperwall Video Wall Solutions

By Sung-Jin Kim, Ph.D. Co-Founder / Chief Technology Officer on Aug 3, 2017 4:40:00 AM

A few years ago we introduced a streaming capability in the Hiperwall system. Our Streamer was a small application you could run on a PC that sent the desktop screen to the wall at 60 FPS. If the machine had a capture device, such as an HDMI capture card, it could stream whatever that device pushed out.

Although this worked well, we saw room for improvement. The question for us was: what would improving performance cost in computing and networking resources? To answer that, let's walk through the four steps involved in streaming a desktop screen.

Step 1. Capture
Certain resources are required to capture the desktop screen. This process can be fairly CPU intensive; however, starting with Windows 8.1, Microsoft provided a very efficient method of capturing the screen. Because of this, the CPU cost was lowered to a level that is reasonable, at least from our point of view.
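
As a rough illustration of what this capture step produces (not the Windows capture path Hiperwall uses), the sketch below grabs a single desktop frame with the cross-platform, third-party mss library and reports its raw, uncompressed size; the library choice is purely an assumption for illustration.

```python
# A minimal capture sketch: grab one frame of the desktop and see how much raw
# data it represents before any compression is applied.
import mss

with mss.mss() as sct:
    monitor = sct.monitors[1]        # the primary monitor
    frame = sct.grab(monitor)        # one raw desktop frame
    raw_mb = len(frame.rgb) / 1e6    # uncompressed RGB payload for this single frame
    print(f"{frame.width}x{frame.height} frame, {raw_mb:.1f} MB of raw pixels")
```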

Step 2. Compress/encode
The raw data of a captured screen is too big to send over a network; it needs to be compressed in some way. In our original Streamer, we used a light compression technique assisted by a modern GPU, such as an Nvidia graphics card. This approach required a more powerful GPU but reduced CPU usage. The lightly compressed stream was easy to render and display on the receiving side, but it also used a lot of network bandwidth.

Step 3. Sending over the network
Even though we compressed the data, it still required a massive amount of bandwidth to be sent over the network. Streaming a 1920 x 1080 screen at 30 FPS consumes about 51% of a gigabit network. With a properly configured switch, multiple streams from multiple sources could go to multiple destinations, but misconfigured switches could easily overwhelm the entire network.
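
A quick back-of-the-envelope check of that figure, assuming 24-bit color and roughly 3:1 light compression (the exact compression ratio of the original Streamer is an assumption here):

```python
# Bandwidth arithmetic behind the "about 51% of a gigabit network" figure.
width, height, fps = 1920, 1080, 30
bits_per_pixel = 24                 # assumed 24-bit RGB
gigabit = 1_000_000_000

raw_bps = width * height * bits_per_pixel * fps   # ~1.49 Gb/s uncompressed
light_bps = raw_bps / 3                           # ~0.50 Gb/s after an assumed 3:1 light compression

print(f"raw:               {raw_bps / 1e9:.2f} Gb/s ({raw_bps / gigabit:.0%} of gigabit)")
print(f"light compression: {light_bps / 1e9:.2f} Gb/s ({light_bps / gigabit:.0%} of gigabit)")
```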

Step 4. Render and display the stream
On the receiving end, the display computers responsible for the portion of the video wall showing the stream receive and render it. Our old lightly compressed streams were extremely easy to render, so fairly low-end machines could display them without much effort.

As you can see, our resource-usage strengths were in Steps 2 and 4. Our greatest weakness was in Step 3, where network bandwidth became the streaming bottleneck. Even though we could fully meet customer demands for multiple video streams on the LED video wall, the network infrastructure had to be set up appropriately and required special care.

Today, people want to stream higher-resolution screens at higher frame rates. Upgrading to more network bandwidth, such as a 10 gigabit network, will of course accommodate this, but it doesn't actually solve the problem.

So we took another look at our design and decided to focus on improving Steps 2 and 4 in order to reduce network bandwidth usage. This makes the most sense, since computing resources such as the CPU and GPU keep getting faster and more powerful while network bandwidth remains mostly the same. In a perfect world, 10 gigabit networks would be widely available, but we are not holding our breath for that to happen.

In Step 2, we added an industry-standard H.264 encoding engine, which comfortably encodes a 1920 x 1080 screen at 60 FPS on modern CPUs and GPUs. Then we added our own patent-pending special sauce to serve our specific purpose. The result is that network bandwidth (Step 3) is now reduced to about 50-100 Mb/s, or 5-10% of a gigabit network, which reduces the burden on network infrastructure and enables more simultaneous video streams across the video wall. As an added benefit, the new encoding engine also allows much higher image quality than our old compression engine.
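
As a generic illustration of that encoding step (not Hiperwall's patent-pending engine), the sketch below pushes synthetic 1920 x 1080 frames at 60 FPS through FFmpeg's standard H.264 encoder via the PyAV bindings; the bitrate target and output file name are assumptions chosen to land in the 50-100 Mb/s range quoted above.

```python
import av
import numpy as np

WIDTH, HEIGHT, FPS = 1920, 1080, 60

container = av.open("stream_sample.mp4", mode="w")        # hypothetical output file
stream = container.add_stream("libx264", rate=FPS)        # FFmpeg's software H.264 encoder;
stream.width = WIDTH                                      # a hardware encoder could substitute
stream.height = HEIGHT
stream.pix_fmt = "yuv420p"
stream.bit_rate = 75_000_000   # ~75 Mb/s target, mid-point of the 50-100 Mb/s figure

for i in range(120):  # two seconds of synthetic frames standing in for screen captures
    rgb = np.full((HEIGHT, WIDTH, 3), i % 256, dtype=np.uint8)
    frame = av.VideoFrame.from_ndarray(rgb, format="rgb24")
    for packet in stream.encode(frame):      # PyAV converts rgb24 to yuv420p as needed
        container.mux(packet)

for packet in stream.encode():               # flush buffered frames out of the encoder
    container.mux(packet)
container.close()
```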

In Step 4, the receiving side decodes and renders the stream, which is more work than before. However, since most modern CPUs and GPUs include very efficient H.264 decoding, the CPU/GPU resource usage increases only slightly.
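
The matching receive side, again sketched with PyAV rather than Hiperwall's renderer, simply decodes the stream back into frames ready to draw; on real hardware this decode can use the GPU's dedicated H.264 decode block, which is why the added cost is modest.

```python
import av

container = av.open("stream_sample.mp4")         # the hypothetical file from the encode sketch
for i, frame in enumerate(container.decode(video=0)):
    rgb = frame.to_ndarray(format="rgb24")        # pixels ready to blit onto the wall segment
    if i == 0:
        print(f"decoded {frame.width}x{frame.height} frame")
container.close()
```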

With this rebalancing of resource usage, we achieve very efficient streaming with improved image quality and a roughly tenfold reduction in network bandwidth.

We place a lot of emphasis on future-proofing, and our software can easily adapt to hardware improvements. When more powerful hardware encoding and decoding engines become available, our software engine will be able to use them to provide higher-resolution streams with minimal increase in network bandwidth.


