I believe that performance tuning/analysis/testing is one of the most complex tasks in the IT world. I have read articles by well-known IT people with a reputation for well-founded statements, yet when they did performance tuning/analysis/testing they were proven wrong.
I am planning a series of performance tuning/analysis/testing posts about storage performance with Linux file systems, Docker, MySQL, MS SQL Server and more. To avoid the common mistakes, I searched the internet for scientific approaches.
Andrew Pruski wrote a nice article and Raj Jain wrote a book about this topic. This presentation also covers the common mistakes. I decided to use these approaches and hopefully provide well-founded posts.
I added some points to Andrew's approach. I will call it 8PP (8 Phases of Performance tuning/analysis/testing) from now on, because I will reference this approach often.
And don't forget: "Performance tuning/analysis/testing is a continuous process." What you consider optimal for your workload today may not be optimal tomorrow.
I will link to a real example showing the 8PP soon.
8PP – The 8 Phases of Performance tuning/analysis/testing (Draft 1.3)
Phase 1 – Observation
- 1.1 Understand the problem/issue
- Talk to all responsible people if possible
- Is the problem/issue based on a real workload?
- Is the evaluation technique appropriate?
- 1.2 Define your universe
- If possible, isolate the system as much as you can
- Make sure to write down exactly how your system/environment is built (see the sketch at the end of Phase 1)
- Firmware, OS, driver, application versions, etc.
- 1.3 Define and run basic baseline tests (CPU, MEM, NET, STORAGE)
- Define the basic tests and run them while the application is stopped
- Document the basic baseline tests
- Compare to older basic baseline tests if any are available
- 1.4 Describe the problem/issue in detail
- Document the symptoms of the problem/issue
- Document the system behavior (CPU, MEM, NET, STORAGE) while the problem/issue arises
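
For Phase 1.2, here is a minimal Python sketch of how the environment could be recorded. The collected fields and the output file name are just assumptions; extend it with the firmware, driver, and application versions that matter for your setup.

```python
import json
import platform
from datetime import datetime, timezone

# Collect a minimal description of the test environment (Phase 1.2).
# Extend this with firmware, driver, and application versions for your setup.
environment = {
    "captured_at": datetime.now(timezone.utc).isoformat(),
    "hostname": platform.node(),
    "os": platform.platform(),
    "kernel_release": platform.release(),
    "architecture": platform.machine(),
    "python": platform.python_version(),
}

# Store the snapshot next to the test results so every run can be traced
# back to the exact environment it was produced on.
with open("environment.json", "w") as f:
    json.dump(environment, f, indent=2)

print(json.dumps(environment, indent=2))
```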
Phase 2 – Declaration of the end goal or issue
- Officially declare the goal or issue
- Agree with all participants on this goal or issue
Phase 3 – Forming a hypothesis
- Based on the observation and the declaration, form a hypothesis
Phase 4 – Define an appropriate method to test the hypothesis
- 4.1 Don't define overly complex methods
- 4.2 Choose … for testing the hypothesis
- the right workload
- the right metrics
- some metrics as key metrics
- the right level of details
- an efficient approach in terms of time and results
- a tool you fully understand
- 4.3 Document the defined method and set up a test plan (see the sketch below)
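
To illustrate Phase 4.3, the test plan can be captured as a small, machine-readable structure. The fields, the fio invocation, and the file name below are assumptions, not a prescribed format; adapt them to your own workload and tool.

```python
import json

# A hypothetical test plan for a storage hypothesis (Phase 4.3).
# Tool, workload, and metric names are examples; adapt them to your setup.
test_plan = {
    "hypothesis": "Random 4k read latency on this volume exceeds the baseline",
    "workload": "fio 4k random read, 1 job, 30 s",
    "command": ["fio", "--name=randread", "--rw=randread", "--bs=4k",
                "--size=1G", "--runtime=30", "--time_based",
                "--output-format=json"],
    "metrics": ["read_iops", "read_lat_mean_us", "cpu_util"],
    "key_metrics": ["read_lat_mean_us"],
    "repetitions": 3,  # Phase 5.1: run the test at least twice
}

# Document the method as part of the test plan artifacts.
with open("test_plan.json", "w") as f:
    json.dump(test_plan, f, indent=2)
```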
Phase 5 – Testing the hypothesis
- 5.1 Run the test plan
- make sure no other workloads are running during the test; if they cannot be stopped, postpone the test
- run the test at least twice
- 5.2 Save the results (see the sketch below)
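
A minimal sketch of Phase 5, assuming the test_plan.json from the Phase 4 example and a benchmark tool that prints its results to stdout. Both are assumptions; the point is only "run at least twice and save the raw output of every run".

```python
import json
import subprocess
from datetime import datetime, timezone

# Load the plan written in Phase 4.3 (see the earlier sketch).
with open("test_plan.json") as f:
    plan = json.load(f)

for run in range(1, plan["repetitions"] + 1):
    started = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    # Run the benchmark command defined in the test plan.
    result = subprocess.run(plan["command"], capture_output=True, text=True)

    # Phase 5.2: save the raw output of every run, never just a summary.
    out_file = f"run_{run}_{started}.json"
    with open(out_file, "w") as f:
        f.write(result.stdout)
    print(f"run {run}: exit code {result.returncode}, results in {out_file}")
```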
Phase 6 – Analysis of results
- 6.1 Read and interpret all metrics
- understand all metrics
- compare metrics to basic/advanced baseline metrics
- is the result statistically sound? (see the sketch after this phase)
- has a sensitivity analysis been done?
- concentrate on key metrics
- 6.2 Visualize your data
- 6.3 "Strange" results mean you need to go back to Phase 4.2 or 1.1
- 6.4 Present understandable graphics for your audience
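
For Phase 6, a small sketch of a basic statistical sanity check, assuming a key metric (mean read latency) has been extracted from the saved runs and that a baseline value exists. The numbers and the 10 % threshold are made up for illustration.

```python
import statistics

# Hypothetical key metric (mean read latency in µs) extracted from the saved runs.
run_latencies_us = [412.0, 398.5, 405.2]
baseline_latency_us = 350.0  # from the basic baseline tests (Phase 1.3)

mean = statistics.mean(run_latencies_us)
stdev = statistics.stdev(run_latencies_us)
cv = stdev / mean  # coefficient of variation across the repeated runs

# High run-to-run variation is a "strange" result: go back to
# Phase 4.2 or 1.1 instead of trusting the numbers.
if cv > 0.10:
    print(f"Runs vary by {cv:.1%}; results are not trustworthy yet.")
else:
    delta = (mean - baseline_latency_us) / baseline_latency_us
    print(f"Mean latency {mean:.1f} µs, {delta:+.1%} vs. baseline.")
```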
Phase 7 – Conclusion
- Is the goal or issue well defined? If not, go back to Phase 1.1
- 7.1 Form a conclusion on whether and how the hypothesis achieved the goal or solved the issue
- 7.2 Next Step
- Is the hypothesis true, but the goal/issue not achieved/solved?
- form a new hypothesis
- Is the hypothesis false?
- form a new hypothesis
- Is there a dependency on something else?
- form a new hypothesis
- Is the goal achieved or the issue solved?
- Document everything! (You will need it in the future.)
Phase 8 – Further research
- 8.1 If needed, form a new goal/issue
- 8.2 Define and run advanced baseline tests for future analysis
- 8.3 If possible, implement a continuous approach to monitoring the key metrics (see the sketch below)
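
For Phase 8.3, a minimal sketch of a continuous check of key metrics against the advanced baseline. The metric names, baseline values, and tolerance are assumptions; in practice the baseline would come from the Phase 8.2 tests.

```python
# Hypothetical advanced baseline (Phase 8.2) and current measurements.
baseline = {"read_lat_mean_us": 350.0, "read_iops": 52000}
current = {"read_lat_mean_us": 430.0, "read_iops": 48000}
tolerance = 0.15  # flag anything drifting more than 15 % from the baseline

for metric, base_value in baseline.items():
    drift = (current[metric] - base_value) / base_value
    status = "OK" if abs(drift) <= tolerance else "INVESTIGATE"
    print(f"{metric}: {current[metric]} ({drift:+.1%} vs. baseline) -> {status}")

# In practice this check would run on a schedule (cron, CI, monitoring system)
# and feed back into Phase 1 whenever a drift is detected.
```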
The 8PP itself will change from time to time as performance tuning/analysis/testing evolves.
Go docker Kitematic!