Performance engineering is broadening and deepening in both scale and scope.

Emerging trends in performance engineering promise more responsive systems, in less time, with less risk and less impact. But there are a few key issues to be aware of, say five experts who discussed the state of performance engineering at a recent panel discussion.

The panel, sponsored by Micro Focus, included moderator Richard Bishop, senior quality engineer at Lloyds Banking Group; Paul McLean, performance consultant at RPM Solutions; Wilson Mar, performance architect at McKinsey; Ryan Foulk, president and founder of Foulk Consulting; and Scott Moore, senior performance engineering consultant at Scott Moore Consulting.

Here are the top trends and issues these experts see as game-changers, and what your team needs to know about them.

Massive scalability changes things

Auto-scaling sounds like a wonderful feature; a cluster can simply add servers when demand reaches a certain predefined level. McLean from RPM Solutions highlighted how this is changing the nature of performance engineering work.

McLean said the central question itself is changing: “Can the servers process 500 transactions per second?” becomes “How do the servers handle a doubling of the workload?”

There is a “spin-up” period for new servers, said Bishop of Lloyds Banking Group. Fifteen minutes can pass between the moment a trigger warns that the cluster needs a new web server and the moment that server is actually online. That lag can degrade performance, or even cause an overload, in ways that are noticeable to customers.

Because of this, human experts need to define the limits of auto-scaling: at what level of CPU, memory, disk, or bandwidth use should the cluster add capacity? Cloud computing charges typically accrue by the hour, so if those thresholds are set too low, the business ends up renting capacity it doesn’t need.

If the thresholds are set too high, the result is the lag and overload issues Bishop identified. McLean also suggested that companies watch the scale-down: after a peak in traffic, the number of servers should decrease again. If it doesn’t, the company will keep paying to rent as many cloud servers as it has ever needed at once, defeating the purpose of auto-scaling.
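The trade-offs the panelists describe can be sketched as a small decision function. This is a minimal sketch, not any vendor's auto-scaler: the threshold values, server limits, and names are assumptions for illustration. Note the two failure modes from the discussion: capacity requested during the spin-up delay is not yet usable, and forgetting the scale-down branch leaves you paying for peak capacity forever.

```python
# Assumed thresholds: add a server above 70% average CPU, remove one below 30%.
SCALE_UP_CPU = 0.70
SCALE_DOWN_CPU = 0.30
SPIN_UP_DELAY_S = 15 * 60   # new servers take ~15 minutes to come online
MIN_SERVERS, MAX_SERVERS = 2, 20

def decide(avg_cpu: float, servers: int, pending: int) -> int:
    """Return +1 to add a server, -1 to remove one, 0 to hold.

    `pending` counts servers requested but not yet online, so repeated
    high-CPU samples during the spin-up delay don't over-provision.
    """
    if avg_cpu > SCALE_UP_CPU and servers + pending < MAX_SERVERS:
        return +1   # capacity won't actually arrive for SPIN_UP_DELAY_S
    if avg_cpu < SCALE_DOWN_CPU and servers > MIN_SERVERS and pending == 0:
        return -1   # without this branch you keep paying for peak capacity
    return 0
```

Real auto-scalers (Kubernetes' HorizontalPodAutoscaler, AWS Auto Scaling groups) add stabilization windows and cooldowns on top of this basic logic, but the human-chosen thresholds remain the hard part.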

Globalization will rebalance the equation

McKinsey’s Mar pointed out a different set of issues: the rise of a global workforce and the ever-expanding reach of computing, both of which will change performance engineering as a discipline.

During the global pandemic, many companies allowed their employees to work from home, or indeed from anywhere with internet service and power. Enough workers seized that opportunity that a full return to the office has become problematic. As a result, many companies are turning to remote hiring, which means more people will be accessing corporate IT resources from farther afield.

Today, Mar sees performance testing as something that mostly happens inside the data center. But with new satellite and other communications services, it will become possible to simulate true end-to-end loads from anywhere, move workloads out of the enterprise with fog computing, let the Internet of Things proliferate, and stream more video all over the world.

As bandwidth increases, the Jevons paradox predicts that people will use more of it. As a result, programmers will create more complex applications (because the cost of downloading a large website that makes many API calls suddenly matters less), and customers will choose to do things that require more bandwidth.

Mar said performance testers need to be prepared for these changes. Foulk of Foulk Consulting suggested that teams need to think through and write better non-functional requirements to anticipate those needs.

All of this moves performance engineering from a reactive role to a predictive one. Bishop of Lloyds Banking Group said that while predictive analytics tools are starting to emerge, all too often companies just throw the software over the wall and hope for the best.

On that note, all of the panelists agreed that the pace of software delivery is increasing, and that performance work needs to be part of a tight improvement-feedback loop. One way to do this is through the continuous integration pipeline.

Add performance to the CI/CD pipeline

Panelists agreed on the potential value of including performance testing in the continuous integration/continuous delivery (CI/CD) pipeline. Learning how things behave immediately after a change is introduced makes it easier to debug and troubleshoot.
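A performance check in a CI pipeline usually boils down to a short load followed by a pass/fail gate. The sketch below shows that gating logic under stated assumptions: the endpoint URL, latency budget, and request count are hypothetical, and real teams would generate load with a dedicated tool (JMeter, k6, LoadRunner) rather than a loop of HTTP calls.

```python
# Minimal sketch of a CI performance gate: run a short "smoke" load,
# then fail the build if 95th-percentile latency exceeds a budget.
import statistics
import time
import urllib.request

STAGING_URL = "http://staging.example.com/health"  # hypothetical endpoint
P95_BUDGET_MS = 250                                # assumed latency budget
REQUESTS = 50                                      # short smoke load

def default_fetch():
    urllib.request.urlopen(STAGING_URL).read()

def p95_latency_ms(fetch=default_fetch, requests=REQUESTS):
    """Time `requests` calls to `fetch`; return 95th-percentile latency in ms."""
    samples = []
    for _ in range(requests):
        start = time.perf_counter()
        fetch()
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.quantiles(samples, n=100)[94]  # 95th percentile cut point

def gate(p95_ms, budget_ms=P95_BUDGET_MS):
    """Return a CI exit code: 0 passes the build, 1 fails it."""
    return 0 if p95_ms <= budget_ms else 1
```

A CI job could end with `sys.exit(gate(p95_latency_ms()))`, so that a blown latency budget fails the pipeline the same way a failing unit test would.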

Moore of Scott Moore Consulting noted that the speed of technology adoption is increasing across the board. As an example, he said virtualization technology took maybe a decade to go mainstream, while container adoption took maybe half that time.

Extrapolating from that: people today are experimenting with AI and ML in performance engineering and with putting performance testing in the CI/CD pipeline. Expect these technologies to become the norm sooner rather than later.

Moore reminded the audience that while containers may be commonplace in development, they have yet to take off for testing, especially performance testing. All of the experts agreed that integrating performance testing into the pipeline poses challenges: building environments, staging data, generating load, and performing meaningful analysis inside a tight CI/CD loop.

Moore speculated that a test environment running in containers, or on Kubernetes, might be easier to build and run. The real challenge might be getting a test, and meaningful results, in five or ten minutes.

Learn what you need now

Moore said, “I keep hearing ‘CI/CD’ from every customer, but it’s just because they want to go faster.” We are getting closer, he added, to being able to quickly create the necessary environment, prepare the test and its scenarios, launch it all, spit out the results, and have an algorithm read those results and explain what we need to do.

“Big business is doing this now. If you haven’t studied how to do this, now’s the time to learn,” Moore added.

McLean of RPM Solutions suggested this might be harder than it looks, especially for companies with limited resources. He reminded the panelists that getting a large amount of data, a sufficiently large test system, and all the data preparation configured, then having the tests run and the environments spun up and torn down, all in a matter of minutes, is a major change.

This is especially true compared to the multi-day setup and testing cycles many companies use today.

Still, getting the right tests to run automatically on a tight enough schedule may be the next challenge. It can be intimidating, and people sometimes choose to run abbreviated tests, often because they don’t feel safe admitting what the full results would show.

Create a safe environment

All the data in the world won’t make a difference unless someone examines it, interprets it, and explains what the data means and why the organization should act on it.

McKinsey’s Mar pointed to a recent study from his company suggesting that the main driver of performance in organizations is not agile, DevOps, CI, or CD, but rather psychological safety. Groups where people feel safe reporting problems or failures and offering solutions are the only environments where new ideas have the potential to emerge.

Mar suggested a newer approach: make performance testing the work of the team itself, not a task performed by an outside group as part of a checklist. That makes performance engineering more than just testing, and it lets metrics become things the team is interested in improving rather than some sort of external report card.

Final lessons

The panelists left the audience with a distinct impression of two opposing forces in performance engineering. Systems are becoming more complex and therefore require more sophisticated tools to manage and analyze performance problems. Yet the human element remains the main difference between success and failure.

Another trend is the value of feedback throughout the process, not only to inform about the performance of a particular version, but also to learn what customers are doing to provide information on what to build next. Finally, there is a gap between the state of the practice and what might be possible.

Clearly, organizations will need to leverage performance engineering to create a better customer experience.
