Prashanth Nanjundappa, VP, Product Development, Progress, in an interaction with Asia Business Outlook, shares his views on the key challenges associated with streamlining application delivery, how organizations can address the growing concern of supply chain vulnerabilities in software and hardware components, and more.
In the context of critical infrastructure, what are the key challenges associated with streamlining application delivery, and how do these challenges impact the reliability and security of these systems?
The present era is far more complex than it used to be. The push for faster time to market has spread the ecosystem across on-premises, hybrid, mobile, and edge environments in addition to the cloud. The other challenge is the complexity of application delivery itself. Take the case of BigBasket or Amazon running a sale: how do you ensure the sale-related information is available across all the applications in the ecosystem on the same day? How do we deliver those applications, and how do we ensure errors and issues are handled during delivery? Figuring this out requires developers and other professionals with technical backgrounds, and the compliance and security aspects are not intuitive to them.
Most engineers can figure out the development aspects; they can learn languages and mobile technologies, but they must also be made aware of the compliance level they need to operate at. For example, if you are taking someone's credit card details or home address, how do you safeguard that information? What regulatory authorities are there to ensure the collected information does not get leaked, and what measures have you put in place to prevent such a leak? This gives a clear picture of the ecosystem's complexity.
What strategies and technologies can be employed to strike the right balance between ensuring rapid application updates for critical infrastructure systems and maintaining stringent security protocols?
The first, and least intuitive, strategy is cultural. There is an assumption that application development is the developer's job, security is the compliance or GRC team's job, and managing applications is the operations team's job. This is the most significant barrier to any organization's success. Developers cannot build a secure application if they think security and governance are not their job, and an application can only be reliable and available 24/7, 365 days a year, if its developers are familiar with the facets of operations. Look at any start-up that has brought a product to production: there was no CISO to stop them; the developers themselves were thinking like security personnel. Where there was a CISO, they worked hand in hand with the application developers, specifying the measures that must be considered as the application is built to make it secure. It could be as simple as ensuring that data stored in AWS S3 is encrypted. In this way, they deliver prescriptive instructions at the beginning of the product development lifecycle rather than appearing at the end as a blocker to the release.
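The encryption guardrail described above can be expressed as a policy check that runs early in the lifecycle instead of blocking a release at the end. The sketch below is a minimal, hypothetical "compliance as code" rule; the bucket inventory format and field names are invented for illustration, not a real AWS API.

```python
# Hypothetical compliance-as-code check: flag storage buckets that are not
# encrypted at rest, so the finding surfaces at the start of development.
# The inventory format and "encryption_at_rest" field are illustrative.

def unencrypted_buckets(buckets):
    """Return the names of buckets whose data is not encrypted at rest."""
    return [b["name"] for b in buckets if not b.get("encryption_at_rest", False)]

# Example inventory, as might be exported from an infrastructure template.
inventory = [
    {"name": "customer-orders", "encryption_at_rest": True},
    {"name": "raw-uploads"},  # encryption never enabled, so it should fail
]

violations = unencrypted_buckets(inventory)
assert violations == ["raw-uploads"]  # gate the release on an empty list
```

Run as a pipeline step, a non-empty result fails the build, which is exactly the "prescriptive instructions up front" posture the answer describes.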
These silos need to be broken, and that is where the adoption of DevOps and DevSecOps practices comes in. Culture is a significant barrier, and we have to rewire it to break the silos and adopt methodologies like DevOps and DevSecOps.
The next strategy is adopting zero trust principles. It is a significant paradigm shift, but it means you trust an entity only when proof of authentication is presented. By default, you do not trust anyone, in contrast to the traditional approach. Following this verify-then-trust approach means adopting zero trust methodologies and setting a clear mandate on the level of security your infrastructure will have. Finally, hiring suitable, competent candidates will help close the cultural gaps.
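The default-deny posture of zero trust can be sketched in a few lines. This is an illustrative toy, assuming a hypothetical token store standing in for a real identity provider; in production the verification would be a cryptographic check against an IdP, not a dictionary lookup.

```python
# Hypothetical zero-trust sketch: deny every request unless verifiable proof
# of identity is presented, regardless of where the request comes from.
# VALID_TOKENS stands in for tokens issued by a real identity provider.

VALID_TOKENS = {"token-abc": "deploy-service"}

def authorize(request):
    """Trust nothing by default; grant access only on verified proof."""
    identity = VALID_TOKENS.get(request.get("auth_token"))
    if identity is None:
        return "deny"           # default posture: no proof, no access
    return f"allow:{identity}"  # verified, so trust is granted per request

assert authorize({"auth_token": "token-abc"}) == "allow:deploy-service"
assert authorize({}) == "deny"  # being "inside" earns no implicit trust
```

The design point is that there is no branch that trusts a request based on its network of origin; absence of proof always falls through to deny.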
How does the integration of legacy systems and the adoption of modern application delivery methodologies, such as DevOps and continuous integration/continuous deployment (CI/CD), pose challenges for critical infrastructure organizations?
Adopting DevOps and DevSecOps methodologies delivers the tools to accelerate and secure development. For instance, a survey of IT and security practitioners conducted the preceding year found that 56% had adopted DevSecOps practices, partly as a business requirement to deliver applications quickly and frequently. Secondly, there are the looming cybersecurity threats: organizations are notified when they are attacked, which pushes them toward modern application delivery. The other challenge is tool integration. Tools need to be able to integrate with one another, and that is where it is important to choose tools that speak the language of code: infrastructure as code, compliance as code, security as code, CI/CD as code, and GitOps workflows as code. An "as code" approach to the solution makes it relatively easy to integrate with other tools.
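The "everything as code" idea above can be illustrated by treating the pipeline itself as plain data plus small functions, so that security and compliance checks integrate like any other stage. This is a minimal sketch with invented stage names, not any particular CI/CD product's format.

```python
# Hypothetical "as code" pipeline: stages are ordinary functions, the pipeline
# is an ordered list, so it can be versioned, reviewed, and extended with
# security and compliance stages like any other code. Names are illustrative.

def build(state):      state["artifact"] = "app-1.0.tar.gz"; return state
def scan(state):       state["scanned"] = True; return state    # security as code
def compliance(state): state["compliant"] = True; return state  # compliance as code
def deploy(state):     state["deployed"] = True; return state

PIPELINE = [build, scan, compliance, deploy]  # CI/CD as code: ordered, reviewable

def run(pipeline):
    state = {}
    for stage in pipeline:
        state = stage(state)
    return state

result = run(PIPELINE)
assert result["deployed"] and result["scanned"] and result["compliant"]
```

Because each stage shares one calling convention, integrating a new tool is just appending a function, which is the integration ease the answer attributes to the "as code" approach.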
As far as traditional versus modern is concerned, there is a significant gap in the choice of technology. Organizations prefer to move to the cloud to adopt the latest technologies, but most of their business still comes from on-premises or legacy applications. Hence, modernizing applications is the third challenge.
Amidst the growing emphasis on cloud computing and edge computing for critical infrastructure applications, what are the major challenges associated with orchestrating and managing the delivery of applications across these distributed environments?
The first challenge is packaging the application. We have already pointed out the case of the food delivery platform and the importance of timely system updates to the business. The second is latency, or unreliability in distribution. Some devices might be within the network, but consider a delivery runner connected over mobile connectivity: you do not know how good the connectivity is, and the device may be switched off, out of battery, and so on. Hence, application upgrades, software updates, and delivery all need to be reliable.
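One common way to get reliable delivery over flaky connectivity like the runner's phone is retry with exponential backoff. The sketch below is a hypothetical illustration, with a simulated flaky device rather than a real push mechanism.

```python
# Hypothetical retry-with-backoff sketch for delivering an update to an
# intermittently reachable edge device. FlakyDevice simulates a phone that
# is unreachable for its first few delivery attempts.

class FlakyDevice:
    def __init__(self, fails_before_success):
        self.remaining_failures = fails_before_success

    def push_update(self):
        """Simulated delivery: fails while the device is unreachable."""
        if self.remaining_failures > 0:
            self.remaining_failures -= 1
            return False
        return True

def deliver_with_retries(device, attempts=5):
    delay = 1  # seconds between tries, doubled after each failure
    for attempt in range(attempts):
        if device.push_update():
            return attempt + 1  # how many tries delivery took
        delay *= 2  # exponential backoff; a real agent would sleep(delay)
    return None  # gave up; a real system would queue for later

device = FlakyDevice(fails_before_success=3)
assert deliver_with_retries(device) == 4  # succeeded on the fourth try
```

Backing off avoids hammering a device the moment it reconnects, while the retry budget bounds how long one update can stall the rollout.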
The third challenge is application reliability: when an application is running, how do you monitor its health, ensure it runs safely across the ecosystem, and take remediation actions if it is not running as expected? For instance, a fuel pump has a metering component: a customer can key in an amount of money, and it automatically calculates how many litres of fuel to deliver. If that system goes down, what would the impact be, and how do you update it immediately without sending a person out?
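The monitor-and-remediate loop described above can be sketched briefly. This is a hypothetical illustration: the application registry and the "restart" remediation are invented stand-ins for a real fleet-management agent.

```python
# Hypothetical monitor-and-remediate sketch: poll each application's health
# and automatically restart the unhealthy ones, without human intervention.
# The registry and the restart action are illustrative stand-ins.

apps = {
    "fuel-pump-display": {"healthy": False},
    "payment-terminal":  {"healthy": True},
}

def remediate(name, state):
    """Remediation action: here, a simulated restart that restores health."""
    state["healthy"] = True
    return f"restarted {name}"

def monitor(registry):
    """One monitoring pass: remediate every unhealthy app, report actions."""
    return [remediate(name, state)
            for name, state in registry.items()
            if not state["healthy"]]

assert monitor(apps) == ["restarted fuel-pump-display"]
assert all(state["healthy"] for state in apps.values())
```

In a real deployment this pass would run on a schedule, and remediation might escalate (restart, redeploy, page a human) rather than always restarting.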
The last challenge is security: ensuring the application is not compromised and that a user does not install an application mimicking the business's application. Consider someone using a company's app on a mobile device: if a mimicking application starts harvesting data from that device, the user will lodge a complaint stating that the company has compromised their data, even though it has not. So, ensuring the security of these devices and applications is essential.
How can organizations address the growing concern of supply chain vulnerabilities in the software and hardware components essential to application delivery?
The topic of the supply chain is fascinating because, these days, no piece of software works independently; each leverages many other software components. For complex applications with endless dependencies, the first step is to address software risk, and that is where the software bill of materials (SBOM) comes in. For example, anyone bundling a piece of software into a device or creating an application should know its dependent software; that is the first step.
Second, continuously monitor the software you use for vulnerabilities. Identifying whether an application is vulnerable is essential. So, step one is to know what you are using, followed by a mechanism to scan continuously and identify any vulnerability, and finally, to go and remediate it.
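The two steps above, know your dependencies and scan them continuously, can be sketched together. This is a minimal illustration: the package names, versions, and CVE identifier are invented, and a real scanner would query a vulnerability database rather than a local dictionary.

```python
# Hypothetical SBOM scan sketch: (1) the SBOM lists every dependency with its
# version; (2) a scan matches that list against a vulnerability feed.
# All package names and the CVE id below are invented for illustration.

sbom = [
    {"name": "libexample", "version": "1.2.0"},
    {"name": "jsonparse",  "version": "3.1.4"},
]

# In practice this feed would come from a vulnerability database lookup.
known_vulns = {("libexample", "1.2.0"): "CVE-2023-0001"}

def scan(sbom, feed):
    """Return (package, version, advisory) for every vulnerable dependency."""
    return [(d["name"], d["version"], feed[(d["name"], d["version"])])
            for d in sbom if (d["name"], d["version"]) in feed]

findings = scan(sbom, known_vulns)
assert findings == [("libexample", "1.2.0", "CVE-2023-0001")]
```

Because the feed changes daily, the same SBOM must be rescanned continuously; a clean result yesterday says nothing about today, which is why the answer stresses ongoing monitoring over a one-time audit.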
Next is the threat vector, which calls for a defensive approach of constantly looking for problems. Another scenario is when a vulnerability in a system or device has been identified and attackers are exploiting it: how do you defend against that? The moment there is an attack, there are tools that can detect it. Hence, it is essential to have a system for threat detection, alongside knowing what software is being used and understanding the threat vector for the product.
Finally, compliance: maintaining a continuous compliance posture is equally essential to ensure data is safeguarded.