The latest “technology pulse” poll from EY reports that 78% of tech executives are more concerned about the cyber security threats of today than those of a year ago. Among the IT leaders who say they are increasing their IT budgets, 74% report having a plan to prioritise cyber security, according to the online survey of 254 IT executives.
EY west region cyber security leader Ayan Roy says 7% to 10% of the cyber security budget goes towards improving security in the software development process, while 4% to 7% of the software development team’s budget is spent on improving cyber security in the development process.
There is a good reason that cyber security spending is on the rise. Advances in technology boost business productivity, but they also sharpen attackers’ ability to target organisations and individuals.
Take artificial intelligence (AI) and the potential of large language models like ChatGPT, for example. Like many security researchers, Max Heinemeyer, chief product officer at Darktrace, is tracking the acceleration of AI-powered cyber attacks. “Attackers can use machine learning to automate their processes and become more efficient at scaling up their attacks,” he says.
But the flip side of AI being used to automate cyber attacks is the fact that AI-based testing tools can help software development teams identify potential issues far quicker than manual testing. According to crowd-testing platform Unguess, 80% of testing is simply reiterating the checks that the software already has – automating some of the human-driven tasks can save time. Bugs can be identified automatically, and the system can generate the test logic and perform tests on its own.
Thanks to predictive models, AI can also identify various testing parameters and create a test plan. It is possible to examine a lot of data, use reusable test cases and produce thorough test results by automating tests with AI, according to Edoardo Vannutelli, co-founder and test automation leader at Unguess.
“AI algorithms can analyse large volumes of data, including codebases, user inputs and historical testing information, to generate intelligent datasets. These datasets can cover a wide range of scenarios and identify potential vulnerabilities, improving the test coverage and accuracy,” he says.
Shift security back to coders
The principles of security by design offer a starting point for secure coding. EY’s Roy says security is becoming embedded in software development: “Shift left is a leading practice, where the goal is to have software development teams incorporate security early on in the lifecycle – typically in the requirements and design stage – and not as an afterthought.”
For instance, software developers need to check that any input to a piece of code is only allowed to originate from a known – and verified – source. When developing secure code, any input data the application reads should be subject to rigorous boundary and content checking, says Petra Wenham, a volunteer at BCS, the Chartered Institute for IT. If the input does not conform, she says, the data should be discarded entirely.
Such checks help prevent buffer overflow errors, which occur when input data exceeds the memory allocated to hold it. If the application does not validate the length of that data, an attacker can overwrite adjacent memory and, in the worst case, inject executable code that performs unauthorised actions.
Similarly, as Wenham notes, the outputs from a piece of code should only come from within the code itself. Output data should only be sent to verified destinations and not allowed to use memory outside of what has been allocated.
She says the operating system (OS) on which the code runs is responsible for allocating, monitoring and controlling memory usage. From a security perspective, its role is to stop one piece of code from violating the memory allocated to other pieces of code.
“The OS should only permit verified (certified or flagged) code to run; non-verified code should be isolated [and] prevented from running,” adds Wenham.
The link between digitisation and secure coding
The Faroe Islands has drawn on digitisation efforts and initiatives in Denmark, Estonia and Iceland, which means its software complies with all EU security standards.
Janus Læarsson is chief architect at the National Digitalisation Programme of the Faroe Islands. The Faroe Islands’ digitisation strategy involves building a digital infrastructure to modernise government services and deliver better and faster experiences for its citizens. With limited time and budget compounding an existing talent shortage, Læarsson says the IT team needed an approach to software development that could provide an alternative to traditional high-code development and allow external developers to guide and support the process.
OutSystems was selected as the low-code platform to enable teams of developers to participate in the process of creating a system that is complex and secure enough to power the Faroe Islands’ national digitisation initiative. For Læarsson, one of the benefits of a low-code software development platform, such as OutSystems, is that it is regularly updated with security patches for the libraries it uses when creating low-code applications.
Discussing secure coding, Læarsson says: “From criteria’s definition through coding and release – our quality assurance processes include both automated and manual testing, which helps us ensure that we push and maintain high standards with every application and update we do. The software we develop is tested for both functional and structural quality standards – from how effectively applications adhere to the core design specifications, to whether it meets all security, accessibility, scalability and reliability standards.”
Peer review is used to run an in-depth technical and logical line-by-line review of code to ensure its quality. Within the National Digitalisation Programme, Læarsson says: “Our low-code development projects are divided into scrum teams, where each team creates stories and tasks for each sprint and defines specific criteria for these.”
These stories enable people to understand the role of a particular piece of software functionality. “When stories are done, they are tested by the same analysts who have specified the stories. As part of the demos, the stakeholders also have their voice and can ultimately approve or reject specifics. When major components like the citizen portal or business registry portal are to be released, the stakeholders execute test cases, specified by our analysts,” says Læarsson.
Getting stakeholders involved is a key part of ensuring that software development projects are as secure as they need to be, according to Ed Moyle, a member of the ISACA Emerging Trends Working Group.
“There are a legion of possible ways for stakeholders involved at any stage of this process to either introduce or mitigate risks depending on the processes they follow, their training, their awareness and numerous other factors,” he says. “Wherever possible, a risk-aware programme should be designed to reduce, manage and mitigate software risk in a way that takes into account the concerns of all stakeholders involved in the project.”
Moyle recommends that IT leaders bolster the stakeholder actions that favour risk-reduction outcomes. But coding is just one aspect of a thorough application security strategy.
“While coding is arguably the most visible step along the software development and release process, it’s also not the only place where we should focus,” adds Moyle. “Risk management efforts should include the whole lifecycle.”
This means that those responsible for security on an IT project need to understand and account for the whole lifecycle holistically. On top of this, he recommends reaching out to more stakeholders. “Extend your planning to include areas outside development that nevertheless hold a stake. Include and deputise testing personnel, business analysts, project and product managers, support teams, sales, marketing, HR and legal – bring them under the umbrella of caring about the security of what you build,” he adds.
Moyle also urges IT decision-makers looking at hardening their application development projects against cyber attacks to assess four areas of the software development process:
- Maturity – ensure processes are mature so that they are resilient to employee attrition and outcomes are consistent.
- Transparency – ensure transparency in the supply chain of the components and libraries that products rely on (and be able to provide that transparency to customers).
- Compliance – ensure compliance with the various (commercial and open source) licences used in developing software.
- Design simplicity – ensure the design lends itself to being easily understood and evaluated.
However, he says these things are just “the tip of the iceberg” when it comes to the considerations that can and do impact software risk as a practical matter, adding: “You could just as easily include things like: fit for purpose, design rigour, supportability, testing coverage, code quality, time to market, and numerous other things that impact the risks associated with how we design, develop, test, deploy, maintain, support and, ultimately, decommission our software.”
For those involved in software development projects and programmes, security needs to be cemented into the mindset of the developer and IT operations teams. While automated testing and AI can be used to identify programming bugs, understanding the implications of adding a new feature or data feed, or introducing an application programming interface, should not be an afterthought.