ARCHIVED: Completed project: Testing Tools Initiative

This content has been archived and is no longer maintained by Indiana University. Information here may no longer be accurate, and links may no longer be available or reliable.

Primary UITS contact: Craig Spanburg

Completed: January 17, 2014

Description: The desired state for software acquisition, development, and quality assurance at Indiana University is a prescriptive quality assurance methodology that uses both testing and application performance monitoring tools to support the full lifecycle of an application. Because tools alone are insufficient, we will also proactively change the development methodology to incorporate the tools and best practices into the everyday work of our software designers, developers, testers, and managers. The Testing Tools Initiative comprises the following elements:

  • Application performance monitoring (Dynatrace)
  • Load testing software (Neoload)
  • Developer tools (Confluence, JIRA, Greenhopper, FishEye, Crucible, Bamboo, and Clover)
  • Web analytics (Urchin)

Outcome: With the completion of this project, we will have tools and methodologies in place to:

  • Find application problems earlier in development lifecycles to reduce the overall cost of fixing a defect.
  • Alter the current paradigm, in which the Support Center is often the first to learn about a problem, by standardizing alerting procedures in our production environments: application performance monitoring with KPI thresholds will notify the software professionals directly when systems are experiencing problems (see the sketch after this list).
  • Reduce the number of defects that make it into production and cause outages, partial outages, or improper application behavior.
  • Decrease the resolution time for a defect that makes it into production through greater runtime visibility into the applications, which helps isolate problems and bottlenecks.
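
To make the alerting approach concrete, here is a minimal sketch of KPI threshold checking with direct notification of the development team, assuming a monitoring system that exposes its metrics over HTTP as JSON. The endpoint URL, KPI names, thresholds, and email addresses are hypothetical placeholders; in practice, an application performance monitoring tool such as Dynatrace provides equivalent alerting natively.

    # Minimal sketch: poll hypothetical KPI metrics and email the team
    # directly when a threshold is exceeded, instead of waiting for the
    # Support Center to report the problem.
    import json
    import smtplib
    import urllib.request
    from email.message import EmailMessage

    METRICS_URL = "https://monitoring.example.edu/api/metrics"  # hypothetical
    KPI_THRESHOLDS = {
        "avg_response_time_ms": 2000,  # alert when responses average over 2 s
        "error_rate_percent": 1.0,     # alert when over 1% of requests fail
    }
    TEAM_ADDRESS = "app-team@example.edu"  # hypothetical on-call list

    def check_kpis() -> list[str]:
        """Fetch current metrics and return any threshold violations."""
        with urllib.request.urlopen(METRICS_URL) as resp:
            metrics = json.load(resp)
        return [
            f"{name} = {metrics[name]} (threshold {limit})"
            for name, limit in KPI_THRESHOLDS.items()
            if metrics.get(name, 0) > limit
        ]

    def alert(violations: list[str]) -> None:
        """Email the software professionals responsible for the system."""
        msg = EmailMessage()
        msg["Subject"] = "KPI alert: production thresholds exceeded"
        msg["From"] = "monitoring@example.edu"
        msg["To"] = TEAM_ADDRESS
        msg.set_content("\n".join(violations))
        with smtplib.SMTP("localhost") as server:
            server.send_message(msg)

    if __name__ == "__main__":
        violations = check_kpis()
        if violations:
            alert(violations)

A check like this would run on a schedule (for example, from cron every few minutes) against each production application's key indicators.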

Milestones and status:

  • January-May 2010: Gather requirements, evaluate tools, and prepare the proposal. Completed
  • May-July 2010: Negotiate prices and terms with vendors. Finalize and submit the proposal. Completed
  • July 27, 2010: Approval received to proceed. Completed
  • August 2010: Provision the application performance measurement environment and install the Dynatrace software. Completed
    • Oncourse, PeopleSoft, WCMS, Kuali Rice, Coeus, Confluence, and JIRA are using the application performance environment. It is available for use by other applications as needed.
  • August 23-September 10, 2010: Dynatrace training and engagement. Completed
  • September-October 2010: Provision the load testing environment and install Neoload. Completed
    • Oncourse, PeopleSoft, OneStart, and FootPrints are using the load testing environment. It is available for use by other applications as needed.
  • September 21-22, 2010: Neoload training. Completed
  • October 2010-January 2011: Provision developer tool environments and install software. Completed
    • The continuous integration tool environment has been set up and Bamboo installed. ESI became the first adopter, and the environment was ready for use by all other groups in January.
    • The code review and change inspection tool environment has been set up and Crucible and FishEye installed. KFS became the first adopter, and the environment was ready for use by other groups in January.
    • The code coverage tool Clover is a plug-in that depends on the other tools being in place. Developers began using it in January.
  • November 2010: The Oncourse team used Dynatrace and Neoload during the AIX-to-Linux database migration testing. The tests indicated that the Linux environment outperformed the production AIX environment, giving the team confidence going into the migration. Post-migration analysis of the production environment confirmed what the testing tools had indicated during testing (see the load testing sketch after this list). Completed
  • November-December 2010: Provision the web analytics environment and install the Urchin software. Completed
    • The virtual machines for the environment were provisioned and the software installed. The pilot of this new service began in January.
  • January-December 2011: Conduct an Urchin pilot project to investigate features and establish production requirements and processes. In progress
    • Explored integration with existing web log processing resources.
    • Public Affairs and Government Relations became a pilot participant.
    • Pages were tagged and data sent to the Urchin server for comparison with Google Analytics and standard web log data.
  • February 22-23, 2011: Conduct advanced Neoload training for load testing tool users. Completed
  • March 21-25 and March 28-April 1, 2011: Conduct beginner and advanced dynaTrace training for application performance measurement tool users. Completed
  • April 2011: Auxiliary Services began using dynaTrace and Neoload to measure and test their applications. Completed
  • April-May 2011: The Kuali Coeus and Enterprise System Integration teams used dynaTrace to identify a memory leak during KC/RICE testing. Completed
  • May 2011: The FootPrints team used Neoload to verify the performance of the next version of FootPrints. Completed
  • June-August 2011: Explored options for application performance measurement and monitoring of production enterprise systems to provide service owners better real-time performance information. Decided to wait for enhancements coming in dynaTrace 4 at the end of 2011 before beginning a large effort to meet these needs. Completed
  • September 2011: Successfully implemented Shibboleth for Confluence in the staging environment. Shibboleth allows Confluence to be used for development projects with partners outside of IU. Completed
  • October 2011: Began working with the data warehouse team to provide load testing and application performance measurement in support of their Business Intelligence and Reporting Tools upgrade testing. Completed
  • November 2011: Assisted several teams with load testing and application performance measurement in support of Oracle upgrade testing. Completed
  • December 2011: Refocused efforts on application performance monitoring of production enterprise systems, in order to create dashboards for them by the end of 2012. Completed
  • February 2012: Completed upgrade of the application performance monitoring environment to prepare for increased production monitoring and dashboard creation. Completed
  • March-May 2012: Met with all ES development teams to ensure they were aware of all the tools available, and to discuss their plans for adoption. Completed
  • May 2012: Completed upgrade of the load testing environment. Completed
  • May-June 2012: Implemented Shibboleth for the IU Knowledge Commons instance of JIRA. Completed
  • July-September 2012: Prepared, tested, and completed major upgrades to Confluence and JIRA. Completed
  • August 2012: Reviewed progress on the application performance monitoring website (combines architecture diagrams with dynaTrace and Big Brother dashboards) with ESA and two development teams, and began incorporating suggested improvements. Completed
  • September 2012: Conducted InfoShare for all UITS staff on the tools available, with examples of how they're used, as well as what's been learned by using them. Completed
  • October 2012: Completed the major upgrade of Bamboo, the continuous integration tool, from 3.4.5 to 4.0.2. Completed a major upgrade of our application performance testing environment from dynaTrace 4.1 to 4.2. Upgraded our FishEye license from 100 users to unlimited users to accommodate the growing use of this code change inspection tool. Began addressing the mounting incoming email issues in JIRA, our software issue tracking system. Completed
  • November 2012: Began supporting KFS batch performance testing with dynaTrace. The SIS teams began using Neoload for performance testing of PeopleTools and other SIS updates, while monitoring performance with dynaTrace. Continued addressing the incoming email issues in JIRA. Completed
  • December 2012: Began supporting KFS application performance testing with Neoload and dynaTrace. Upgraded Gliffy, the charting and graphing plug-in for Confluence. Completed addressing the incoming email issues in JIRA. Completed
  • January-March 2013: Continued support of KFS batch and application performance testing with dynaTrace and Neoload. Performed the major upgrade of the production dynaTrace environment to 4.2. Performed the major upgrade of Confluence from 3.5.2 to 4.X. Implemented the Shibboleth federated identity solution for Enterprise Confluence. Moved the application performance monitoring website (which combines architecture diagrams with dynaTrace and Big Brother dashboards) from ESA to WebTech servers, and incorporated suggested improvements. Completed
  • Early 2013: Proposal to continue use of Testing Tools approved; project became a production service.
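
The November 2010 Oncourse migration test illustrates the general load testing pattern the tools support: drive identical concurrent traffic at two environments and compare their response time distributions. The sketch below shows the idea using only the Python standard library; the environment URLs, user count, and request count are hypothetical, and a real Neoload scenario adds ramp-up profiles, think times, assertions, and full reporting.

    # Minimal sketch: compare response times of two environments under
    # identical concurrent load (in the spirit of the AIX vs. Linux test).
    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    ENVIRONMENTS = {
        "aix-prod": "https://app-aix.example.edu/health",      # hypothetical
        "linux-test": "https://app-linux.example.edu/health",  # hypothetical
    }
    CONCURRENT_USERS = 25
    REQUESTS_PER_USER = 40

    def timed_request(url: str) -> float:
        """Issue one request and return its wall-clock latency in seconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        return time.perf_counter() - start

    def load_test(url: str) -> list[float]:
        """Simulate CONCURRENT_USERS users issuing requests in parallel."""
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            futures = [
                pool.submit(timed_request, url)
                for _ in range(CONCURRENT_USERS * REQUESTS_PER_USER)
            ]
            return [f.result() for f in futures]

    if __name__ == "__main__":
        for name, url in ENVIRONMENTS.items():
            latencies = sorted(load_test(url))
            median_ms = statistics.median(latencies) * 1000
            p95_ms = latencies[int(len(latencies) * 0.95)] * 1000
            print(f"{name}: median {median_ms:.0f} ms, "
                  f"95th percentile {p95_ms:.0f} ms")

Running the same scenario against both environments before and after a change gives the kind of evidence the Oncourse team used to gain confidence in the Linux environment.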

Comment process: Email Craig Spanburg.

Benefits: This project will allow us to:

  • Identify problems earlier in the software lifecycle
  • Alert key staff of problems earlier and provide better data when problems occur
  • Reduce the number of defects that make it into production
  • Decrease the time it takes to fix defects that do make it into production

Primary client: UITS Enterprise Software and Enterprise Infrastructure; the tools will be available to all IU units once they are in production.

Client impact: Developers will produce higher-quality software, and when problems do occur, they will be addressed more effectively.

Project team: UITS Enterprise Web Technical Services is managing the project and environments. Team members include UITS development and infrastructure staff, who will develop best practices for use of the tools.

Governance: The UITS Enterprise Software and Enterprise Infrastructure managers and development leads will assist with implementing best practices.

This is document bafs in the Knowledge Base.
Last modified on 2018-01-18 16:19:53.