Applications of Open Source CMS (3) - eWEEK: Open Source in the Enterprise

Jim Rapoza, Director of eWEEK Labs, published the Labs’ evaluation results for a chosen set of portal and enterprise CMS stacks:

eWEEK Labs Bakeoff: Open Source Versus .Net Stacks

Pure Open Source:

  • LAMP - XOOPS on Linux, Apache, MySQL and PHP
  • Linux J2EE - Liferay on CentOS Linux, Apache, Tomcat and the bundled Hypersonic SQL (HSQLDB) database
  • Linux JBoss - JBoss on CentOS with MySQL
  • Linux Python - Zope/Plone on SUSE Enterprise Linux

Mixing Windows and open source:

  • XOOPS on Windows Server 2003, Apache, MySQL, PHP
  • Plone on Windows Server 2003 R2
  • JBoss and MySQL on Windows Server 2003



Review: Open-source and .Net zealots can both take away positives from eWEEK Labs’ testing of various application stacks, but a mix-and-match approach wins the day. Bottom line: Open source and .Net better learn to play nice.

Can your organization’s IT stack stand up to the burdens placed upon it? Do the components in your IT stack provide the best possible performance? Do you have a choice? It’s time to find out.

There are all kinds of stacks out there, from network stacks to code stacks. But in recent years, the stacks that have been getting the most attention are those that are referred to—somewhat broadly—as IT stacks. Essentially, an IT stack consists of a server operating system, a Web server, a database, and a scripting or development language.

Of course, IT stacks deserve all the attention they get. After all, that grouping of applications is the core base that most Web-based enterprise applications run on—from portals to enterprise content management systems to CRM (customer relationship management) and ERP (enterprise resource planning) platforms.

Further, as companies move more aggressively into SOAs (service-oriented architectures) and other service-based systems, their IT stacks will play huge roles in determining ongoing service strategies.

Probably the two best-known stacks are Microsoft’s .Net and the open-source LAMP.

The .Net stack typically consists of a Windows Server operating system, the IIS (Internet Information Services) Web server, the SQL Server database and the ASP.NET scripting framework. The LAMP stack comprises the Linux server operating system, Apache Web server, MySQL database and one of the three “P” scripting languages (PHP, Python or Perl).

After these two stacks, the biggest—especially in enterprises—is the J2EE (Java 2 Platform, Enterprise Edition) stack. This stack is pretty flexible in terms of its components, but there is one constant: The development language has to be JSP (JavaServer Pages).

Of course, these aren’t the only three IT stacks out there. When you start mixing and matching applications, and introducing applications we haven’t even talked about, the choices are almost endless.

However, typically an IT stack isn’t chosen based on the quality of the applications therein but on issues such as history (“We’ve always been a Linux/Unix/Windows shop”), internal skill sets (“Our developers only know ASP/JSP/PHP”) or end products (“We really want to run Product X, which is .Net/Linux/Java”).

But what about the stacks themselves? How much does the choice of stack affect performance? Do stacks need to be pure in their configuration, or can a business get solid performance by mixing and matching among multiple stacks?

These are some of the questions eWeek Labs set out to answer a few months ago, when we began a series of tests to evaluate the makeup, performance and scalability of enterprise IT stacks.

We performed a series of load tests against eight mixes of IT stacks (admittedly, barely scratching the surface of potential stacks). These consisted of pure LAMP stacks, a pure .Net stack, J2EE on both Windows and Linux, and what we will refer to as a WAMP stack—basically, open-source components running on a Windows server.

Our tests show that all of the stacks perform well enough to handle most enterprise needs. Some did better than others, but no one was a leader in all categories.

But there were some results that may prove surprising. Mix-and-match stacks tended to do fairly well in our tests—especially the stacks that took a nonstandard route when it came to the database.

Probably most surprising was the solid performance that came from the stacks that contained a mix of a Windows server and open-source components. Traditionally, these kinds of WAMP setups have been considered suitable only for development and testing purposes, not for production systems. But, based on the performance we saw in our tests, businesses should seriously consider the combo for their enterprise applications.

That’s not to say that pure-play Microsoft isn’t a good bet: Microsoft’s .Net stack performed very well in our tests, clearly showing the benefits of the tight integration among each of the stack components.

We hope our tests provide some perspective, but, more than anything else, we hope they inspire IT managers to perform the same kinds of tests themselves. No tests done in a third-party lab can tell you how a specific combination of servers and applications will run under your business-specific requirements and systems.

Our tests were labor- and time-intensive, but there was nothing too unusual about the equipment involved. Probably the biggest expense would be the performance testing application itself, although there are free, open-source testing tools capable of doing the job.
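
To give a sense of what that can look like, here is a minimal sketch of a homegrown load driver using nothing beyond Python’s standard library; the URL, client count and run length are placeholders, not our actual test parameters:

    # Minimal load driver sketch: N "virtual clients" repeatedly fetch one URL
    # for a fixed duration, then we report hits per second and mean response time.
    # PORTAL_URL, CLIENTS and DURATION are placeholders, not eWeek Labs' setup.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    PORTAL_URL = "http://localhost/portal/index.php"   # hypothetical target
    CLIENTS = 50                                        # concurrent virtual clients
    DURATION = 60                                       # seconds per run

    def virtual_client(stop_at):
        hits, failures, busy = 0, 0, 0.0
        while time.time() < stop_at:
            start = time.time()
            try:
                with urllib.request.urlopen(PORTAL_URL, timeout=30) as resp:
                    resp.read()
                hits += 1
            except OSError:
                failures += 1                           # timeouts, server-busy errors
            busy += time.time() - start
        return hits, failures, busy

    if __name__ == "__main__":
        stop_at = time.time() + DURATION
        with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
            results = list(pool.map(virtual_client, [stop_at] * CLIENTS))
        hits = sum(h for h, _, _ in results)
        busy = sum(b for _, _, b in results)
        print(f"hits/sec: {hits / DURATION:.2f}")
        print(f"avg response time: {busy / max(hits, 1):.3f}s")

A commercial tool such as SilkPerformer layers scripting, coordinated ramp-up and reporting on top of this basic idea, but the underlying measurement is not exotic.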

Testing the IT stacks

eWeek Labs had several goals in mind when we started our tests of IT stacks.

First, we weren’t interested in doing an unrealistic stress test designed to see which IT stack broke first. What we wanted was to run each stack under a heavy but realistic and consistent level of traffic to get practical results that could be applied to most organizations’ computing environments.

To test a Web-facing platform like an IT stack, we needed a subject application to test with. We wanted to avoid the clean-room-like environment in which these kinds of tests are often run, so instead of building a test application and then porting it to the languages used in the evaluation, we decided to use real-world applications.

Specifically, we chose to use portal applications because they exist in pretty much every scripting language and we could create almost the exact same test script in each one.

We used portals we consider popular—Microsoft SharePoint Portal Server 2003 (built on ASP.NET), XOOPS (PHP), Plone (Python), and Liferay and JBoss Portal (JSP).

On the server side, our test systems were AMD Opteron-based servers with SATA (Serial ATA) RAID drives and 2GB of RAM. A separate system was configured for each database.

Virtual load test clients were generated by an AMD Athlon 64-based workstation running Windows XP. Everything ran on a Gigabit Ethernet network in eWeek Labs. Each test was run multiple times to avoid test discrepancies and outlier results.

We considered several different tools for performing the load tests, including the open-source OpenSTA (see below). We looked hard at OpenSTA, as it would have made it much easier to share our test scripts and methodologies. However, while OpenSTA had all the requisite capabilities, its configuration and reporting limitations would have added to our testing time.

We eventually decided to use Borland’s SilkPerformer (formerly from Segue Software) to handle the actual test and reporting management. During the course of each approximately hourlong test, SilkPerformer ran 1,000 virtual clients against the stack applications.

To test the IT stacks, we recorded a script doing basic tasks that could be repeated in every one of the portals. The tasks included loading an identical page from each portal, loading a members page and general portal surfing. We opted to use open, rather than user-authenticated, pages because we did not want the process to turn into a test of authentication systems.
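
As a rough sketch of that task mix (the paths here are hypothetical stand-ins, not the portals’ real URLs), one pass of a virtual client’s session could be expressed like this:

    # Sketch of one pass of the scripted session described above.
    # The paths are hypothetical examples; each portal's real URLs differ.
    import time
    import urllib.request

    def portal_session(base_url):
        """Front page, members page, then a little general surfing."""
        pages = ["/", "/members", "/news", "/forums"]
        timings = {}
        for path in pages:
            start = time.time()
            with urllib.request.urlopen(base_url + path, timeout=30) as resp:
                resp.read()                       # pull the full page body
            timings[path] = time.time() - start
        return timings

    if __name__ == "__main__":
        # A load driver would loop this session for each of the 1,000 virtual clients.
        print(portal_session("http://localhost/portal"))

In the actual tests, SilkPerformer recorded and replayed the equivalent session and handled ramp-up, coordination and reporting across the 1,000 virtual clients.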

Among the many results generated from the tests, the ones we have chosen to publish focus on performance averages. These include average transactions per second, average throughput per second, average hits per second, average page download time and average document download time.

We should point out that the last two test averages include server busy time, as well as page download and document download attempts when the servers were more heavily loaded. The averages are a good barometer of how a stack setup will handle a long and heavy load, but they don’t represent the actual time it will take to download a page or document from these servers.
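
To make the distinction concrete, here is a small, invented calculation showing how such averages fall out of raw request samples; the numbers are made up for illustration, and a "transaction" is simplified here to mean one successfully completed request:

    # Illustrative only: deriving the published averages from raw samples.
    # Each sample is (elapsed_seconds, bytes_received, succeeded); failed or
    # server-busy attempts stay in the data, so the time averages include them.
    samples = [
        (0.42, 18_304, True),
        (0.55, 18_304, True),
        (3.90, 512, False),   # server-busy/error response under heavy load
        (0.47, 18_304, True),
    ]
    window_seconds = 2.0      # wall-clock length of this (tiny) sample window

    hits = len(samples)
    completed = sum(1 for _, _, ok in samples if ok)
    total_bytes = sum(b for _, b, _ in samples)
    total_time = sum(t for t, _, _ in samples)

    print(f"avg hits/sec:          {hits / window_seconds:.2f}")
    print(f"avg transactions/sec:  {completed / window_seconds:.2f}")
    print(f"avg throughput (KB/s): {total_bytes / window_seconds / 1024:.2f}")
    print(f"avg page download (s): {total_time / hits:.2f}  # includes busy time")

The last figure is the point of the caveat above: a handful of slow, busy-server attempts pulls the average well above what a single user would see on a lightly loaded server.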

Like any large-scale benchmarking test, these IT stack tests are not without their weaknesses.

One could argue that we were testing the subject apps—the portals—and not the stacks themselves. Of course, one could also argue that tests using a custom-built subject application would be less a test of the stacks than a test of the porting skills of the programmers.

In addition, many of the platforms we tested aren’t designed to face users directly. For example, it is recommended that Plone be run behind clusters or Apache proxies in production environments.

In reality, all these products would run in heavily optimized environments in an enterprise. But the point was to test the stacks, not their ideal performance points, which is also why we didn’t tune or optimize any of the systems but ran them as close to default as possible.

The criticism we expect to hear most is of the stacks we left out—including commercial J2EE platforms, such as those available from BEA Systems, IBM, Oracle and Sun Microsystems, as well as the many other database and server platform permutations.

Hopefully, over time we—and readers who perform these tests and share their results—can address this last potential criticism. We plan to update our tests at blog.eweek.com/blogs/eweek_labs, and we invite you to also share your results there.

LAMP

The test we did that was closest to a pure LAMP stack ran on SUSE Enterprise Linux, Apache, MySQL and the XOOPS portal and content management system. We chose XOOPS because of its general popularity and high ranking among PHP-based portals on sourceforge.net.

In nearly every test we ran, this PHP-based LAMP configuration was a solid, middle-of-the-road performer.

For example, we saw average throughput of 120.59KB per second and an average of 24.15 hits per second. Given that this was a pure-vanilla implementation with no tuning, these numbers are actually more impressive than they seem at first. Even the most ardent PHP fans will admit that PHP is not designed with performance in mind and will usually recommend clustering or performance add-ons such as those available from PHP vendor Zend Technologies.

This stack’s performance numbers suggest what many who have been using PHP for some time now (including some of the busiest blogs on the Web) know to be true—that a pure LAMP-based PHP system can easily handle enterprise-class traffic and loads.

Linux J2EE

We ran the Liferay portal system on the Linux J2EE stack because of its popularity as a Java-based portal system and because of its somewhat unusual base configuration.

Liferay uses the Hypersonic SQL database engine (HSQLDB; its original developer later created the separate H2 database), a Java-built database specifically designed to be very fast in Web environments. We ran Liferay on an Apache and Tomcat server infrastructure running on CentOS Linux.

Perhaps somewhat surprisingly, this configuration was among the best performers in our tests, with an excellent average throughput of 1.56M bps and the best average hits per second, 234.81.
