Technical companies today seek Cloud-Native status, desiring transformation to enable dynamic management through containerizing, automating, and orchestrating microservices. Those words all sound great. During the early days of film, Howard Hughes made a movie called “Hell’s Angels”, one of the first to depict aerial combat. He waited for clear skies so everyone could see the action. Unfortunately, with no clouds there were no reference points to judge speed, and instead of being stunning and exciting, the footage was boring; the entire movie was reshot with clouds in frame as a reference for speed. A similar principle applies to successful Cloud-Native transformation: if your DevOps was effective before, with clear metrics demonstrating speed, your effectiveness will continue. Moving DevOps practices to the cloud means expanding and modifying those practices to observe activity, integrate analytics, and adapt telemetry to maximize data visibility.
DevSecOps builds security practices into the Three Ways: flow, continuous feedback, and continuous experimentation. None of these practices needs to change to support a cloud-native goal. All too often, companies undergoing transformation feel adopting cloud guarantees the three “As”: increased access, expanded awareness, and abdicated responsibility. While the first may be true, the second requires effort, and the third should never be adopted. Successful business flow requires understanding existing constraints, working to reduce those barriers, and striving for continuous improvement. Cloud access offers the opportunity to create increased awareness by monitoring more north-south and east-west traffic than traditional networks allow. For all users, increased awareness should generate not abdication but increased control, maximizing resource usage while minimizing work: a true agile principle made possible through continuous observability.
Clouds are modeled after large billowing mists, but network clouds should not hamper visibility. Adopting a cloud model can increase visibility if the proper steps are taken. First, awareness should be built through common tools, understanding Deployment Frequency (DF), Mean Lead Time for changes (MLT), Mean Time To Recover (MTTR), and Change Failure Rate (CFR), DORA's key metrics. These early metrics create awareness of smaller measurements within DevOps pipeline functions: which functions run, where operations stall, and which microservices create the most value. Cloud security should mirror on-premise security with increased emphasis on virtualization, and the Cloud Security Alliance (cloudsecurityalliance.org) offers some great tools to create additional metrics. Digging through smaller observations creates expanded awareness and follows the Second Way by providing continuous feedback about DevOps processes.
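As a concrete illustration, the four DORA metrics can be derived from nothing more than per-deployment timestamps. The sketch below uses invented record fields and sample data (not drawn from any particular tool) to show how little plumbing the initial awareness step actually requires:

```python
from datetime import datetime

# Illustrative deployment records: commit time, deploy time, whether the
# change failed in production, and (if so) when service was restored.
deployments = [
    {"committed": datetime(2023, 5, 1, 9),  "deployed": datetime(2023, 5, 1, 15),
     "failed": False, "restored": None},
    {"committed": datetime(2023, 5, 2, 10), "deployed": datetime(2023, 5, 3, 11),
     "failed": True,  "restored": datetime(2023, 5, 3, 13)},
    {"committed": datetime(2023, 5, 4, 8),  "deployed": datetime(2023, 5, 4, 12),
     "failed": False, "restored": None},
    {"committed": datetime(2023, 5, 5, 9),  "deployed": datetime(2023, 5, 5, 10),
     "failed": False, "restored": None},
]

def dora_metrics(deploys, period_days):
    """Compute DF, MLT, MTTR, and CFR from simple deployment records."""
    n = len(deploys)
    df = n / period_days                                    # Deployment Frequency (per day)
    mlt = sum((d["deployed"] - d["committed"]).total_seconds()
              for d in deploys) / n / 3600                  # Mean Lead Time (hours)
    failures = [d for d in deploys if d["failed"]]
    cfr = len(failures) / n                                 # Change Failure Rate
    mttr = (sum((d["restored"] - d["deployed"]).total_seconds()
                for d in failures) / len(failures) / 3600
            if failures else 0.0)                           # Mean Time To Recover (hours)
    return {"DF_per_day": df, "MLT_hours": mlt, "MTTR_hours": mttr, "CFR": cfr}

print(dora_metrics(deployments, period_days=7))
```

Even this toy version makes stalls visible: one 25-hour lead time stands out against the others, pointing directly at a constraint worth investigating.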
As a cloud-native pattern, one acceptable solution is securing the system from the origin. Building a Minimum Viable Platform (MVP) securely means integrating continuous testing of secure functions: enabling static analysis (SAST), dynamic analysis (DAST), and threat and vulnerability management (TVM) through the pipeline, and then runtime protection (RASP) for your applications. Testing requires visibility, and proving what was visible requires data. Coordinating data requires developing metrics for your tests. The simpler the test, the easier it is to automate and the more data potentially becomes available. Enabling baked-in security through a testing culture supports DevSecOps values and enables blame-free discussions. More complicated tests reveal a different picture, but the goal for security should be to identify metrics tied to MVP solutions to create value.
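To show how a simple test both automates easily and yields a metric, here is a hypothetical check (the header list, function name, and sample data are my own illustrations, not from any standard tool) that scores an HTTP response against a baseline set of security headers:

```python
# A deliberately simple security test: verify that an HTTP response carries
# a baseline set of security headers, and emit the result as a metric.
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
}

def security_header_score(headers):
    """Return (score, missing): the fraction of required headers present,
    plus a sorted list of whichever are absent."""
    present = REQUIRED_HEADERS & set(headers)
    missing = sorted(REQUIRED_HEADERS - present)
    return len(present) / len(REQUIRED_HEADERS), missing

# Example response headers from a hypothetical service
response_headers = {
    "Content-Type": "application/json",
    "Strict-Transport-Security": "max-age=31536000",
    "X-Content-Type-Options": "nosniff",
}

score, missing = security_header_score(response_headers)
print(f"security_header_score={score:.2f} missing={missing}")
```

Run on every deployment, a check like this produces a trendable number rather than a one-off pass/fail, which is exactly the kind of small, automatable metric the MVP approach calls for.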
Feedback value increases through continuous experimentation. Organizations should use their expanded access to test different metrics. Most cloud providers, such as AWS or Azure, use a pricing structure that charges for downloading data. This structure makes it economical to upload large tools that provide an experimental framework generating metrics, while downloading only the small result sets detailing function behavior. The better the dashboard, the more accurate the picture and the more precisely one can work to remove constraints. One recent example, Opsera, uses a continuous orchestration model to show a clear deployment path for multiple metrics, generating increased awareness via its 360-degree unified insights. The end goal should be building “telemetry everywhere”: if a function runs in the cloud and creates business value, comprehensive metrics should be generated and compiled.
Opsera offers continuous orchestration through a declarative structure spanning code, build, artifact, security, quality, and deploy, with a BYOT (bring your own tools) mindset. This platform structure eliminates lock-in and allows integrated analytics. Most structures today, including GitLab or Jenkins, offer limited analytic options and leave the user searching multiple dashboards to track down individual data items. Aggregating and correlating multiple functions with tools declaratively allows each user to identify the most impactful telemetry aspects. Ansible simplifies deployment and GitLab delivers a pipeline, but each delivers only a portion of the telemetry required to make business decisions. Opsera's data transformation engine first normalizes the data, then aggregates it across various tool categories and provides a contextual view that helps DevOps engineers and managers make smart decisions across six dimensions: planning, SCM, pipeline, security, quality, and operations. The result is projectable rather than merely predictable, as projection requires solid footing to visualize where data will be rather than guessing at future results. Integrated telemetry, from everywhere, delivers value by increasing the ability to pivot, maximizing flow, improving feedback, and empowering the declarative structure to enable rapid experimentation.
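The normalize-then-aggregate pattern described above is worth seeing in miniature. The sketch below is a generic illustration of that pattern, not Opsera's actual engine (whose internals are not public here); the tool names, field names, and sample records are invented:

```python
from collections import defaultdict

# Illustrative raw records from different tools, each with its own field names
raw_records = [
    {"tool": "gitlab",    "category": "pipeline", "metric": "pipeline_duration_s", "val": 420},
    {"tool": "gitlab",    "category": "pipeline", "metric": "pipeline_duration_s", "val": 380},
    {"tool": "sonarqube", "category": "quality",  "metric": "code_smells",         "val": 12},
    {"tool": "snyk",      "category": "security", "metric": "critical_vulns",      "val": 2},
]

def normalize(record):
    """Map a tool-specific record onto a common schema (hypothetical)."""
    return {"dimension": record["category"], "name": record["metric"], "value": record["val"]}

def aggregate(records):
    """Average each metric within its dimension for a dashboard-style summary."""
    buckets = defaultdict(list)
    for r in map(normalize, records):
        buckets[(r["dimension"], r["name"])].append(r["value"])
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

print(aggregate(raw_records))
```

Once every tool's output lands in one schema, a single query can answer cross-tool questions (did pipeline duration rise when critical vulnerabilities fell?) instead of forcing the user through one dashboard per tool.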
Cloud-native’s strongest message should be that security depends on observability. The new options delivered here might be sufficiently revolutionary to change the term to “Opsera-vability”. A sound DevOps culture and effective practice depend on organizational transparency. Just as with Hughes and the airplanes soaring past clouds, code movement only appears in relation to other business objects. Whether an organization is cloud-native, currently transforming, or maximizing on-premise resources, transparency requires a full view rather than a soda-straw glimpse of a few selected processes. More metrics allow better aggregation to identify constraints and produce value. DevOps foundations are built on Agile practices, and Agile values responding to change over following a detailed plan. Opsera’s continuous orchestration and analytics allow real-time change and then produce results nearly as quickly, enabling data-driven innovation. Sometimes new companies merely play buzzword bingo to deliver the next functional product. Opsera offers an end-to-end solution, automating orchestration, integrating declarative functions, and merging analytic reports to provide value-generating situational awareness. If your organization is currently making a cloud migration, or merely seeks to enhance value, embracing DevOps culture should be the first step, and the second should be pursuing a continuous orchestration method like Opsera.
Dr. Mark Peters is a DevOps Institute Ambassador and USA chapter chair who is currently working to integrate security practices into the development pipelines for a DoD cyber weapon system. He authored Cashing in on Cyberpower.