Building the satellite constellation while you’re flying it? The need for large-scale space-system simulation
- May 24, 2018
- Space Operations
Many Silicon Valley companies describe their operations as “building an airplane while we’re flying it.” They innovate, release, observe, and iterate, and frequently the end product looks nothing like the original concept. It’s a proven formula for rapidly advancing technology and exposing latent demand in communities that couldn’t previously imagine the newly delivered services. To some degree, the space community is moving in the same direction. Rather than fielding large, purpose-built systems whose operational intent must be well understood years before fielding, the community is looking toward distributed systems that adapt to emerging needs as they arise. However, differences in regulatory approvals, launch constraints, deployment timelines, and the cost of failure still make space-economy dynamics fundamentally different from those of the Internet economy.
But unlike an Internet startup whose “move fast and adapt” philosophy pushes it to field as soon as possible, the owner of a distributed, adaptable space system must maximize the effectiveness of the initial system – both functionally and economically. And therein lies the challenge. How good is good enough, and will the functionality embedded in the space vehicles be adequate to adapt as the mission inevitably evolves? Initial systems are always a compromise between what we think we need and what we can afford, combined with a heavy dose of uncertainty about what the ultimate mission will be. While the ability to upgrade a space system’s OS improves its chance of success, the hardware and even the constellation cannot adapt at the same rate, and certainly not for the same cost.
The challenge comprises both the lack of prior knowledge about proposed novel systems and uncertainty about the ultimate application. Simulation technology coupled with tradespace analysis gives us the same rapid prototyping experience as Internet startups – without incurring costs we can’t afford. This combination delivers insight into the important break-points for answering wide-ranging questions like, “how good is good enough?” While “simulation” and “tradespace analysis” can cover an incredibly broad swath, certain attributes prove to be indispensable:
- Rapid model building. For our industry, “move fast and adapt” means doing that in simulation space – for both systems and missions. And needless to say, the models must be accurate.
- Easy automation. To gain insight and build intuition, we need to look at a lot of cases. Programming every form of excursion while modeling uncertainty becomes an anchor against rapid discovery. Tradespace analysis must be easy to automate.
- Integrated insightful analysis. Finding inflection points in multi-dimensional, non-linear analyses is … tricky. Insightful decision analytics are an absolute must.
- Scalability. In the same spirit that we need tools to rapidly construct physically accurate models, we need the infrastructure to accelerate (simulated) time. That means scaling our calculations.
- Natural transitions for increasing model fidelity. Completing high-level architecture studies satisfies a major milestone, but the job doesn’t stop there. It’s really just the first step of an ongoing design process. The architecture point designs form the basis for subsystem, platform, payload, and component engineering. And all the models must work in concert, and probably cooperate in subsequent levels of iterative design. When done well, this leads to the “digital twin,” which is so important when we move to the operational phase (a topic for a future blog).
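To make the automation and decision-analytics attributes above concrete, here is a minimal sketch of a tradespace sweep in Python. Everything in it – the design variables, the coverage and cost proxies, and the 0.90 coverage floor – is an illustrative placeholder, not the output of any real simulation tool; a real study would replace the proxy functions with physically accurate simulation runs.

```python
import itertools

def coverage_proxy(planes, sats_per_plane, altitude_km):
    """Toy figure of merit: more satellites and higher altitude
    improve coverage, with diminishing returns. Purely illustrative."""
    total_sats = planes * sats_per_plane
    footprint = altitude_km / (altitude_km + 550.0)  # saturates with altitude
    return 1.0 - (1.0 - footprint * 0.05) ** total_sats

def cost_proxy(planes, sats_per_plane, altitude_km):
    """Toy cost model: per-satellite cost plus a per-plane launch penalty.
    Units are arbitrary."""
    return planes * 20.0 + planes * sats_per_plane * 5.0 + altitude_km * 0.01

# Enumerate the tradespace rather than hand-scripting each excursion.
options = itertools.product(
    [3, 4, 6, 8],        # orbital planes
    [6, 8, 10, 12],      # satellites per plane
    [550, 750, 1000],    # altitude (km)
)

results = []
for planes, spp, alt in options:
    cov = coverage_proxy(planes, spp, alt)
    cost = cost_proxy(planes, spp, alt)
    results.append((cov / cost, cov, cost, planes, spp, alt))

# "How good is good enough?" -- keep designs that meet a coverage floor,
# then rank them by coverage per unit cost to expose the knee in the curve.
feasible = sorted((r for r in results if r[1] >= 0.90), reverse=True)
for score, cov, cost, planes, spp, alt in feasible[:5]:
    print(f"{planes} planes x {spp} sats @ {alt} km: "
          f"coverage={cov:.3f}, cost={cost:.0f}")
```

Even this toy version shows the pattern that matters: the sweep is a few lines of enumeration, the “decision analytics” is a filter plus a sort, and swapping in higher-fidelity models later changes the proxy functions, not the study’s structure.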
Learn more about Digital Mission Engineering.