In April, Captain George Galdorisi (USN Ret) of the United States Navy’s Space and Naval Warfare Systems Center Pacific presented a paper on behalf of his team – Dr. Stephanie Hszieh, Antonio Siordia, and Rachel Volner – at the Maritime Security Challenges Conference 2010, hosted by Maritime Forces Pacific in Victoria.

In this issue, Vanguard offers the first of a two-part series of excerpts from that paper examining the impact of emerging C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance) technologies on naval operations in a world of global cooperation. The entire paper is available at: www.navy.forces.gc.ca/navy_images/public_media/georgegaldorisipaper.pdf

How will C4ISR technologies look in the future? Let us assume that we are in the year 2030, the timeframe about which most futurists in the intelligence, military, technology, industry and academic communities feel comfortable making predictions. In this future state we have finally figured out how to filter the data to keep from overloading users. At last, we are able to “compose” applications and services from multiple sources. This gives power users the ability to compose their own mission-specific solutions – choosing data sources, processing steps and display tools. Less advanced users, or those standing a normal watch in a well-defined mission area, get a standard “design time” solution that fits their needs.
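To make the idea concrete, here is a minimal sketch, in Python, of what such user-driven composition might look like; the data source, filter and display below are invented stand-ins, not components of any fielded system.

```python
# A minimal sketch of "composable" C4ISR tooling as described above: a power
# user chains data sources, processing steps and a display tool at run time.
# All names here are illustrative assumptions, not from any real system.

def radar_feed():
    """Hypothetical data source: returns raw contact reports."""
    return [{"id": "C1", "range_nm": 12.0}, {"id": "C2", "range_nm": 85.0}]

def within_range(contacts, max_nm):
    """Processing step: keep only contacts inside a range of interest."""
    return [c for c in contacts if c["range_nm"] <= max_nm]

def text_display(contacts):
    """Display tool: a bare-bones watchstander view."""
    for c in contacts:
        print(f"contact {c['id']} at {c['range_nm']} nm")

def compose(source, steps, display):
    """Run the user-chosen pipeline: source -> steps -> display."""
    data = source()
    for step in steps:
        data = step(data)
    display(data)

# A "power user" composes a mission-specific solution on the spot.
compose(radar_feed, [lambda d: within_range(d, 50.0)], text_display)
```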

As C4ISR technologies evolve along the most likely paths we envision, we see a future state where C4ISR is truly “joint,” with common core services and architectures and where solutions are tailored for the platform (e.g., command center, ship, aircraft, tank) and mission, rather than for the color of the uniform. We envision an evolving end state where there is effectively a library of hundreds of capability modules that can be combined in any way to form thousands of different configurations. This will be enabled by the next generation of technical solutions that follow what we today call “service-oriented architectures,” “common data standards,” “normalized lexicons” and the like, resulting in a true plug-and-play solution set much like Lego blocks.
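A rough sketch of the “Lego block” model follows, assuming a simple shared registry; the module names and data convention are invented for illustration.

```python
# A sketch of a library of capability modules that share one data convention,
# registered by name and snapped together into many configurations. The
# modules and track format below are assumptions made for this example.

MODULE_LIBRARY = {}

def capability(name):
    """Register a function in the shared module library under a name."""
    def register(fn):
        MODULE_LIBRARY[name] = fn
        return fn
    return register

@capability("track-correlator")
def correlate(tracks):
    # Placeholder: merge reports that carry duplicate track identifiers.
    return {t["id"]: t for t in tracks}.values()

@capability("threat-filter")
def threats_only(tracks):
    return [t for t in tracks if t.get("hostile")]

def build_configuration(names):
    """Combine library modules, in order, into one mission configuration."""
    modules = [MODULE_LIBRARY[n] for n in names]
    def configured(data):
        for m in modules:
            data = m(data)
        return list(data)
    return configured

# Two of the hundreds of modules, combined into one of thousands of configurations.
air_defence_picture = build_configuration(["track-correlator", "threat-filter"])
print(air_defence_picture([{"id": "T1", "hostile": True},
                           {"id": "T1", "hostile": True},
                           {"id": "T2", "hostile": False}]))
```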

In 2030, we envision a future state of C4ISR where information is truly and completely independent of system and application. Users will have a few common client applications capable of consuming almost anything and will have other, smaller, focused applications that each perform a single, very specific function. As C4ISR evolves over the next two decades, we envision a world where we build fewer – but more capable – data presentation tools.

Two decades hence, common data formats, good metadata and flexible display tools will allow the user to pick any field of information and vary the display based on it – dynamically filtering the data to facilitate understanding and decision making. We envision this as a fuller maturation of today’s technologies such as Google Earth and other cutting-edge applications.
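The sketch below suggests what “pick any field and vary the display” could mean in practice, assuming records that carry self-describing fields; the record contents are invented.

```python
# A sketch of metadata-driven display: because every record carries named
# fields, one generic tool can filter or group on any field the user picks.
# The example records and field names are invented.

records = [
    {"type": "vessel", "flag": "CA", "speed_kts": 14},
    {"type": "vessel", "flag": "PA", "speed_kts": 22},
    {"type": "aircraft", "flag": "US", "speed_kts": 310},
]

def dynamic_view(data, field, value=None):
    """Filter on any field; with no value given, group by that field instead."""
    if value is not None:
        return [r for r in data if r.get(field) == value]
    groups = {}
    for r in data:
        groups.setdefault(r.get(field), []).append(r)
    return groups

print(dynamic_view(records, "type", "vessel"))   # dynamic filter
print(dynamic_view(records, "flag"))             # dynamic grouping
```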

One of the most important and beneficial trends we see accelerating is the use of automated workflows to help guide operators through complex C4ISR processes, together with the widespread use of agent software (small, focused applications that will tirelessly do the same thing 24x7x365) to automate routine tasks, especially those that are time-consuming and do not need constant human intervention. In this future state, workflows combined with agents will be the key to dealing with – and overcoming – information overload.
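As a minimal illustration, the Python sketch below shows the shape of such an agent: one small program, one routine check, human attention only when action is needed. The fuel-state task, threshold and readings are invented examples.

```python
# A minimal "agent" in the authors' sense: a small, focused program that
# performs one routine check on a schedule and involves a human only when
# something needs attention.

import time

def fuel_state_agent(read_fuel_pct, alert, threshold=20.0,
                     poll_seconds=60, cycles=3):
    """Poll one value; in the field this loop would run 24x7x365."""
    for _ in range(cycles):
        level = read_fuel_pct()
        if level < threshold:
            alert(f"fuel at {level:.0f}% - below the {threshold:.0f}% threshold")
        time.sleep(poll_seconds)

# Simulated sensor readings; only the last should raise an alert.
readings = iter([35.0, 24.0, 18.0])
fuel_state_agent(lambda: next(readings), alert=print, poll_seconds=0)
```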

This is especially crucial in a high-stress warfighting environment, where having a library of predefined workflows allows even a novice user to perform at an acceptable level. Agents attached to the workflow do the heavy lifting by gathering and filtering data. As we evolve to this end state, we envision operators “training” their web-enabled personal assistants to perform a myriad of tasks – even determining which meetings and whose email are important, so that the agent can assist the user in time management.
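A toy sketch of the “trained assistant” idea follows, with a deliberately simple sender-frequency model standing in for whatever learning technique such agents would actually use:

```python
# A sketch of a trainable triage agent: the operator labels a few messages,
# the agent learns which senders matter, and thereafter triages unattended.
# The model (sender frequency) is a stand-in chosen for brevity.

from collections import Counter

class TriageAgent:
    def __init__(self):
        self.important_senders = Counter()

    def train(self, message, is_important):
        """The operator 'trains' the agent with a judgment call."""
        if is_important:
            self.important_senders[message["sender"]] += 1

    def triage(self, message):
        """The agent applies what it has learned, around the clock."""
        return "priority" if self.important_senders[message["sender"]] else "routine"

agent = TriageAgent()
agent.train({"sender": "watch-officer"}, is_important=True)
print(agent.triage({"sender": "watch-officer"}))  # priority
print(agent.triage({"sender": "newsletter"}))     # routine
```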

In 2030, as C4ISR evolves, we envision a world where a combination of agents and better data visualization tools leads to better filters on what information is delivered and displayed. For example, despite quite a bit of fanfare a decade ago focused on delivering a common operational picture (COP) to all users, we now recognize that not all users need or want the same picture. Therefore, in the future, agents will automatically adjust the level of information, and even the format it’s delivered in, to reflect the various needs (and available bandwidth) of an operational-level versus a tactical-level decision maker.
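One plausible shape for such tailoring, sketched in Python; the roles, fields and bandwidth cut-off are assumptions made for illustration:

```python
# A sketch of role- and bandwidth-aware delivery: the same underlying track
# data, reduced differently for a tactical versus an operational decision
# maker, and trimmed further when the pipe is thin.

def tailor_picture(tracks, role, bandwidth_kbps):
    """Trim detail to the consumer's needs and the bandwidth available."""
    if role == "tactical":
        # Tactical users want full local detail.
        picture = tracks
    else:
        # Operational users want an aggregate, not every contact.
        picture = [{"summary": f"{len(tracks)} tracks in area"}]
    if bandwidth_kbps < 64:
        # Thin pipe: strip everything but identifiers and summaries.
        picture = [{k: v for k, v in t.items() if k in ("id", "summary")}
                   for t in picture]
    return picture

tracks = [{"id": "T1", "pos": (48.4, -123.4), "speed_kts": 18}]
print(tailor_picture(tracks, "tactical", bandwidth_kbps=512))
print(tailor_picture(tracks, "operational", bandwidth_kbps=32))
```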

In much the same way as these trained agents work around the clock to tailor information to various levels of command, they can also be trained to alert operators to important tactical, operational or strategic events. For example, agents will watch a trip line or exclusion zone day after day without getting tired, and hundreds or thousands of them can search for indicators of interest.
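A minimal sketch of one such watcher, assuming a circular exclusion zone and using a rough planar distance check; all coordinates are invented:

```python
# A small agent that watches one exclusion zone, checking every position
# report it is handed and alerting on entry. The zone, coordinates and the
# crude planar distance formula are illustrative assumptions.

import math

def make_zone_agent(name, center, radius_km, alert):
    """Return an agent that never tires of checking one zone."""
    def check(track_id, lat, lon):
        # Rough planar distance in km; adequate for a small-zone sketch.
        d_km = math.hypot(lat - center[0], lon - center[1]) * 111.0
        if d_km <= radius_km:
            alert(f"{track_id} inside {name} ({d_km:.1f} km from center)")
    return check

# Hundreds of these can run side by side, one per indicator of interest.
harbor_watch = make_zone_agent("harbor exclusion zone",
                               center=(48.42, -123.39),
                               radius_km=5.0, alert=print)
harbor_watch("V-101", 48.43, -123.40)   # inside: alerts
harbor_watch("V-102", 49.90, -120.00)   # far away: silent
```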

By 2030 we envision a state where advancements in visualization tools and correlation and fusion approaches will yield a seamless, multi-spectral, augmented reality view of the world individually tailored to each user. All data will be fused and available in a single map-based client. We will have met the challenge of how to include, combine and present data (for example, video) collected from multiple data sources. We will be able to present multi-spectral data that the human eye cannot see, and we will find ways to present information without overwhelming the user with textual information for the objects in their field of view.
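The fusion step behind such a single picture might, in its simplest form, look like the sketch below, which greedily merges reports that fall close together on the map; the proximity rule and sensor names are simplifying assumptions:

```python
# A sketch of track fusion for a single map-based client: reports of the
# same object from different sensors are merged into one fused track.

def fuse(reports, max_sep_deg=0.05):
    """Greedily merge reports that fall close together on the map."""
    fused = []
    for r in reports:
        for track in fused:
            if (abs(track["lat"] - r["lat"]) < max_sep_deg and
                    abs(track["lon"] - r["lon"]) < max_sep_deg):
                track["sources"].append(r["sensor"])
                break
        else:
            fused.append({"lat": r["lat"], "lon": r["lon"],
                          "sources": [r["sensor"]]})
    return fused

reports = [{"sensor": "radar", "lat": 48.40, "lon": -123.30},
           {"sensor": "EO/IR", "lat": 48.41, "lon": -123.31},
           {"sensor": "AIS",   "lat": 47.00, "lon": -122.00}]
print(fuse(reports))  # two fused tracks: one two-sensor, one single-sensor
```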

As hybrid warfare makes military planning more complex, the ability of human operators alone – even legions of them – to effectively plan a military operation is being placed under increasing stress. Once an operation is under way, the ability of these operators to observe a perturbation in a plan, react to it, and come up with viable alternatives on the fly, in real time, is almost totally absent in our militaries today.

However, we envision an environment where operators using state-of-the-art planning systems will be able to enhance their situational awareness to the point where they can focus on effects delivered, rather than on just platforms.

Moreover, agents will be able to monitor the execution of a plan and alert the operator when there is an event that interferes with the plan – for example, a logistics aircraft with critical parts or personnel that has a mechanical problem and isn’t going to get where it’s expected on time.
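A minimal sketch of such a monitoring agent for the logistics example, with invented times, margins and dependencies:

```python
# A sketch of a plan-monitoring agent: it compares each leg's latest
# estimated arrival with the plan and alerts when a slip would break a
# dependency. The leg, times and margin are invented for illustration.

from datetime import datetime, timedelta

def monitor_leg(leg, eta, alert, margin=timedelta(minutes=30)):
    """Alert if the new ETA slips past the planned arrival plus margin."""
    slip = eta - leg["planned_arrival"]
    if slip > margin:
        alert(f"{leg['name']}: ETA slipped by {slip} - "
              f"affects {', '.join(leg['supports'])}")

leg = {"name": "resupply flight 07",
       "planned_arrival": datetime(2030, 5, 1, 14, 0),
       "supports": ["radar repair", "crew rotation"]}
monitor_leg(leg, eta=datetime(2030, 5, 1, 16, 45), alert=print)
```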

As the state of C4ISR evolves over the next two decades, we will reach a state where agents begin examining the operational impact the moment a perturbation occurs, and they will be able to devise a series of options with pros and cons for the operator to choose from. A critical element of this process is that modeling and simulation is available to every user and is a seamless element of the planning process that can be revisited at any time, even during the execution phase.
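The sketch below illustrates the shape of that option-generation step, with a trivial stand-in for the embedded modeling and simulation; the options and numbers are invented:

```python
# A sketch of agent-generated options: when a perturbation occurs, each
# candidate response is run through a fast model and handed to the operator
# as a ranked list with pros and cons.

def evaluate_options(options, simulate):
    """Score each option with a fast model and rank by predicted delay."""
    scored = sorted((simulate(o), o) for o in options)
    return [{"option": o["name"], "predicted_delay_h": d, "cons": o["cons"]}
            for d, o in scored]

options = [
    {"name": "divert spare aircraft", "delay_h": 3,  "cons": "strips reserve"},
    {"name": "ship parts by sea",     "delay_h": 36, "cons": "slow"},
    {"name": "cannibalize on site",   "delay_h": 6,  "cons": "grounds one unit"},
]

# A trivial stand-in for the modeling-and-simulation step in the loop.
print(evaluate_options(options, simulate=lambda o: o["delay_h"]))
```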

Taking advantage of technologies that are evolving today, by 2030 we envision a world where virtual reality has become indistinguishable from actual reality – in fact, it will be better, because we will be able to add data that cannot be seen in the “real” world. In other words, “augmented reality” will be what every operator expects. Put another way, all tactical forces will have heads-up displays, not just the fighter pilots, as is the case today. We will have evolved to the state where even the lowest tactical-level user will have some version of direct neural interface, 3D projectors and very large, flexible displays.

Finally – and in some ways most importantly – we see a dramatic change in the way we will secure the information that is generated by a plethora of sensors, transported via a variety of networks, and processed, analyzed and displayed on a wide array of command and control systems. Reducing this vulnerability must be a priority in the future development of C4ISR technologies, as other advancements mean little if they can be exploited by adversaries.

By 2030, we will be able to control data at the field level – and down to the packet level in transit. We will control who can see it and determine whether it arrived intact – and if not, who touched it. This ability to tag data at the field level (and to trust those tags so we know they have not been tampered with) is vital, since it enables the agents not only to help with filtering data, but also to finally make cross-domain and multi-level security problems solvable.
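As one illustration that the building blocks already exist, the sketch below tags each field with an HMAC so a consumer can verify it arrived intact and untampered; key management and labeling policy, the hard parts, are omitted:

```python
# A sketch of field-level tagging using standard primitives: each field
# carries an integrity tag computed over its name and value, so tampering
# with either is detectable. The key and record contents are for demo only.

import hmac, hashlib

KEY = b"demo-key-not-for-real-use"

def tag_fields(record):
    """Attach a per-field integrity tag computed over name and value."""
    return {name: {"value": value,
                   "tag": hmac.new(KEY, f"{name}={value}".encode(),
                                   hashlib.sha256).hexdigest()}
            for name, value in record.items()}

def verify_field(name, field):
    """True only if the field's value still matches its tag."""
    expected = hmac.new(KEY, f"{name}={field['value']}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, field["tag"])

tagged = tag_fields({"position": "48.42N 123.39W", "classification": "S"})
tagged["position"]["value"] = "00.00N 000.00W"                   # tampering
print(verify_field("position", tagged["position"]))              # False
print(verify_field("classification", tagged["classification"]))  # True
```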