At the Plone Performance Sprint
In Bristol helping make Plone go faster
I’m at the Plone Performance Sprint in Bristol, UK and I’m having a total blast.
First and foremost, it’s a great group. I’m just having too much fun working in person with all these people I’ve heretofore only known through the interwebs. The topics brainstorming session yielded a lot of great ideas and potential directions. I kinda resent having to choose between the instrumentation and load testing topics. :)
Florian has a lot of nifty ideas about instrumenting various levels of the Plone stack to get meaningful performance data. This is sorely needed. Theory and guessing in discussions about Plone performance is all well and good, but as we all know, measure, don’t guess. Florian’s instrumentation effort stands to get us good measurements of things ranging from pickle retrieval on the ZODB level all the way up to viewlet rendering time in the UI. I’m definitely looking forward to using whatever they produce. I can’t say much right now, but hopefully in the near future we’ll all be hearing from Mr. Bent. :)
In the end I’ve decided to go with the load testing topic. I’ve been wanting good baseline metrics for Plone performance for some time. Every now and then, a Plone rock star does some profiling, finds some code, and applies a two-line change that increases performance by some ridiculous factor. That’s certainly not the rock star’s fault, and it’s not to denigrate the rock star’s contribution, but it should never happen in the first place. Something should have alerted us that a hotspot was introduced very shortly after it was introduced. Our hope is that with a basic set of load tests run by buildbot, we’ll know when changes are made that impact performance. There are other goals you can read about on the wiki, but this is my primary goal. I hope to be a part of making work on Plone performance boringly predictable. Let’s take the mystery out of it. :)
After the brainstorming and topic selection and such, we got a bit of a start on the load testing story. The first question was which tool to use, for which there were basically only two contenders: JMeter and Funkload. I started out advocating for JMeter. I’d had a brief exposure to Funkload and had a bad experience with it, though I can’t remember why anymore. I built a very intricate load test suite with JMeter after that. It did everything I needed it to do, and the ability to slice and dice the reports and graphs in the UI is great, but everything else sucks. The UI sucks. Using regexps for the things JMeter uses them for sucks. Using Java sucks. Still I advocated for it because it does what it says it will do quite admirably.
At this sprint, many were also under the impression that we should use JMeter, but there are also a handful of Funkload lovers. Through the subsequent discussion and experimentation, I think most, if not all, of us in the JMeter camp have been thoroughly converted. Now that I understand Funkload better, I see that I give up nothing I really need and I gain… well, Python!
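For those who haven’t seen it, a Funkload scenario is just a plain unittest-style Python class, which is exactly the appeal. A minimal sketch, roughly following FunkLoad’s own simple example (the class name, config section, and URL are placeholders, not anything we’ve written yet):

```python
import unittest

from funkload.FunkLoadTestCase import FunkLoadTestCase


class PloneFrontPage(FunkLoadTestCase):
    """One scenario; fl-run-bench turns it into a load test."""

    def test_front_page(self):
        # The server URL comes from the matching config file
        # (PloneFrontPage.conf), [main] section, "url" option.
        server_url = self.conf_get('main', 'url')
        self.get(server_url, description='Fetch the Plone front page')


if __name__ in ('main', '__main__'):
    unittest.main()
```

You run it once with fl-run-test to make sure the scenario works, then fl-run-bench to pile on concurrent users and fl-build-report to get the pretty charts.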
One idea I’m not sure I’ll have time to explore is integrating testbrowser and Funkload. If I could make that work, I could write testbrowser doctests that can be run as load tests with full reporting options! /me swoons
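I haven’t tried any of this yet, but the rough shape in my head is a thin browser-like wrapper that funnels every request through Funkload, so each hit shows up in the bench reports. Totally untested; the FunkLoadBrowser class and scenario.txt file below are made-up placeholders for the idea, not real API:

```python
import doctest
import unittest

from funkload.FunkLoadTestCase import FunkLoadTestCase


class FunkLoadBrowser(object):
    """Hypothetical testbrowser-ish facade that records hits via FunkLoad."""

    def __init__(self, testcase):
        self._testcase = testcase
        self.last_response = None

    def open(self, url):
        # Every open() goes through FunkLoad's get(), so it is timed
        # and reported like any other request in the bench run.
        self.last_response = self._testcase.get(
            url, description='browser.open %s' % url)


class DoctestLoadTest(FunkLoadTestCase):
    def test_doctest_scenario(self):
        browser = FunkLoadBrowser(self)
        # Run an existing testbrowser-style doctest, handing it our
        # FunkLoad-backed browser instead of zope.testbrowser's.
        doctest.testfile('scenario.txt', globs={'browser': browser})


if __name__ in ('main', '__main__'):
    unittest.main()
```

The real work would be making the wrapper speak enough of the zope.testbrowser API (forms, links, contents) that existing doctests run unmodified, which is exactly the part I may not get to.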
Today we’ll be getting started with the actual load tests. Good times!
Updated on 12 December 2008