Empirical Software Engineering

Initial Results

Unfortunately, we underestimated the time that fully reviewing the Drupal 6.x release notes would take. By limiting the scope to the beta and release candidate versions that led up to the official 6.0 release, however, we found several types of information we want to explore in more detail.

Release Schedule

The overall release schedule is interesting.  In reviewing the development team’s notes for each release, we identified several notes about the number of bugs fixed and the general activities in each release.

The release schedule itself is centered around a series of beta releases to a small set of testers and module developers/upgraders, followed by a series of release candidates to the larger testing community.  For Drupal 6.0, this resulted in the following:

  • Beta 1 (9/15/2007) – Noted as covering 8 months of development to get to this point
  • Beta 2 (10/17/2007) – bug fixes + security fixes from Drupal 5.3 and 4.7.8
  • Beta 3 (11/21/2007) – 180+ fixes
  • Beta 4 (12/5/2007) – 80+ fixes
  • RC 1 (12/20/2007)
  • RC 2 (1/10/2008) – security fixes from 5.6
  • RC 3 (1/30/2008)
  • 6.0 Official Release (2/13/2008) – Noted as covering 1 year of development and 1600+ resolved issues
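For a quick sense of the cadence, the gaps between these milestones can be computed directly from the dates listed above; a minimal Python sketch (using only the dates as given):

    # Compute the number of days between consecutive Drupal 6.0 milestones.
    from datetime import date

    milestones = [
        ("Beta 1", date(2007, 9, 15)),
        ("Beta 2", date(2007, 10, 17)),
        ("Beta 3", date(2007, 11, 21)),
        ("Beta 4", date(2007, 12, 5)),
        ("RC 1", date(2007, 12, 20)),
        ("RC 2", date(2008, 1, 10)),
        ("RC 3", date(2008, 1, 30)),
        ("6.0", date(2008, 2, 13)),
    ]

    for (prev_name, prev_date), (name, when) in zip(milestones, milestones[1:]):
        print(f"{prev_name} -> {name}: {(when - prev_date).days} days")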

New Features

The beta 1 release notes were used to capture the core set of new/heavily modified features for Drupal 6, listed below, along with the reasons for the changes.

  • Installer
  • Language Support
  • OpenID
  • Actions and Triggers
  • Update Status
  • Menu System
  • Theming
  • Book and Forum Changes

Reasons for Change

At a high level, we opted to discuss each new feature in terms of its “reason for change” rather than its “requirements source” after recognizing that this allows for finer granularity than a simple “source” code.  For example, many features fall under the category of “user requests” with no formal requirements; however, there is often some information available regarding the reason for the request (e.g., simplicity, performance, usability).  The full list of reasons identified, and the features they map to, is being tracked via a shared spreadsheet.
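As a rough illustration of the structure we are tracking in that spreadsheet, here is a minimal Python sketch; the reason codes attached to each feature are hypothetical examples, not our actual coding results:

    # Hypothetical feature -> "reason for change" mapping; illustrative only.
    from collections import defaultdict

    feature_reasons = {
        "Installer": ["usability"],
        "Language Support": ["user request", "internationalization"],
        "Menu System": ["performance", "simplicity"],
    }

    # Invert the mapping to see which features each reason touches.
    reason_features = defaultdict(list)
    for feature, reasons in feature_reasons.items():
        for reason in reasons:
            reason_features[reason].append(feature)

    for reason, features in sorted(reason_features.items()):
        print(f"{reason}: {', '.join(features)}")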


Next Steps

The results thus far raised several new questions:

  1. What types of bugs were fixed in each beta release and release candidate?
  2. Does bug density by module within Drupal core correlate with any specific new feature or reason for change?

Based on these questions, a second round of open coding is underway to gather and analyze more information about the specific impacts of the new features.
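To make question 2 concrete, the sketch below shows the kind of analysis we have in mind; every module name and count here is a made-up placeholder until the second coding round produces real numbers from the issue tracker:

    # Hedged sketch: correlate per-module bug density with whether the module
    # received a new/reworked feature in Drupal 6. All numbers are placeholders.
    bugs_fixed = {"menu": 42, "book": 7, "forum": 9, "system": 30, "locale": 25}
    lines_of_code = {"menu": 3000, "book": 1200, "forum": 1500, "system": 6000, "locale": 2800}
    changed_in_6 = {"menu": 1, "book": 1, "forum": 1, "system": 0, "locale": 1}  # 1 = new feature

    modules = sorted(bugs_fixed)
    density = [bugs_fixed[m] / lines_of_code[m] for m in modules]
    changed = [changed_in_6[m] for m in modules]

    # Pearson correlation, computed by hand to avoid extra dependencies.
    n = len(modules)
    mean_d, mean_c = sum(density) / n, sum(changed) / n
    cov = sum((d - mean_d) * (c - mean_c) for d, c in zip(density, changed))
    var_d = sum((d - mean_d) ** 2 for d in density)
    var_c = sum((c - mean_c) ** 2 for c in changed)
    r = cov / (var_d * var_c) ** 0.5
    print(f"correlation between bug density and feature change: {r:.2f}")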

While we still want to do our first round of open coding as individuals before coming together to compare notes, we took Dr. Ludi’s suggestion to do a bit of coding as a team just to make sure we’re on the same page.  The results were interesting.  We started with the release notes for the 6.x versions (http://drupal.org/node/3060/release?api_version%5B%5D=87).  Beginning with the first versions (at the end of the list), we saw that they actually referred out to more detailed release notes pages.

From here, we looked at the new features listed in the initial 6 beta release (http://drupal.org/drupal-6.0-beta1).  A sample of the coding results for this is shown below.

[Sample coding results]

Like many OSSD projects, Drupal uses a single issue management system to track both bugs and new features (http://drupal.org/project/issues/drupal?categories=All). While this provides a convenient, single location for gathering information about the project, it does not offer a consolidated view of the system requirements and their current implementation status. To develop an understanding of the relationships between requirements sources and the quality of Drupal, we undertook an iterative approach to reviewing and coding the available information. While the three types of coding described by Corbin and Strauss [6] may be applied independently, we found them to be a useful guideline for an iterative approach to refining our results.

Individual Open Coding

As T. Gorschek and A. M. Davis [2] note, the effects of software requirements are often not seen until subsequent versions. With this in mind, we began our analysis by open coding [6] the Drupal Core v6.x release notes [11]. This phase of our data collection and analysis was done with no restrictions other than a common goal: to focus on the source of requirements for new features in Drupal (i.e., not coding error fixes) and on any quality information that we could trace to a requirements issue. While the release notes were our primary focal point, further issue/bug-specific information was collected as needed by drilling into issue details.
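For concreteness, a single coded item in our working notes has roughly the following shape; the field names are our own convention, and the issue id shown is a hypothetical placeholder, not a real drupal.org node:

    # One coded release-note item from the open coding pass (illustrative).
    coded_item = {
        "release": "6.0-beta1",
        "feature": "OpenID support",
        "requirements_source": "user request",  # open code; may be revised later
        "reason_for_change": "usability",
        "quality_notes": "follow-up bug reports appeared in later betas",
        "issues": [123456],  # hypothetical issue id used for drill-down
    }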

Code Consolidation and Axial Coding

Having completed a first pass of coding, we combined our results to create a master version of the coded release notes. Differences of opinion or terminology were resolved by revisiting the literature review or collecting further clarifying data.

Selective Coding

Finally, the combined and normalized coding of the Drupal 6.x release notes was re-reviewed to segment identified topics based on the following general categories:

  • Requirements source types
  • Quality indicators
  • Relationships between specific sources and specific increases/decreases in quality
  • Overall trends and patterns regarding requirements sources and overall project quality

Drupal is a free and open source web-based content management system originally developed by Dries Buytaert. It is written in PHP and is distributed under the GNU General Public License (GPL). According to the World Wide Web Consortium, 2% of all websites on the web use Drupal. It has a very large community of developers and users, consisting of 17,421 developers and 808,003 registered users (drupal.org). It uses the following methods for connecting its communities (drupal.org community):

  • Online and Local groups
  • Events and Meetups
  • Chat (IRC)
  • Planet Drupal (an aggregate of blog posts by the Drupal community)
  • Community Spotlight
  • Commercial Support
  • Forums
  • Mailing Lists

Like most open source projects, Drupal uses bug tracking as the central place for managing requirements, so we will gather the requirements from the Drupal issue tracker. We will focus only on Drupal core for our research; Drupal core consists of the basic content management features.

McCall (1977) and Boehm (1978) were the first to do extensive work on identifying software quality characteristics. Their work established the foundation for most of the research done on software quality since then. Another such work is the FURPS model developed by Grady and Caswell (1987) at HP. These three models provide the basis for the ISO 9126-1 software quality model.
Quality depends on the needs of prospective stakeholders; therefore it is difficult to come up with an absolute, universal metric for software quality (Boehm, 1978). But there are still some software quality characteristics that can be generalized.
According to Boehm (1978), the higher-level software quality characteristics are extracted from the answers to the following questions:

  1. How well can the software be used as-is?
  2. How easy is it to maintain the software?
  3. Can it still be used if the environment changes, i.e., how portable is it?

Based on these questions, higher-level software quality depends on as-is utility, maintainability, and portability. These characteristics are divided into more specific quality characteristics, creating the following hierarchical structure:

  • As-is utility
    • Reliability
    • Efficiency
    • Usability
  • Maintainability
    • Testability
    • Understandability
    • Modifiability
  • Portability
    • Device independence
    • Self-containment

McCall’s model is quite similar to Boehm’s. It divides software quality characteristics into three main categories, each of which is further divided into more specific characteristics:

  • Product Revision
    • Maintainability
    • Flexibility
    • Testability
  • Product Transition
    • Portability
    • Reusability
    • Interoperability
  • Product Operations
    • Correctness
    • Reliability
    • Efficiency
    • Integrity
    • Usability

FURPS is another software quality model that is quite common in industry. Its name spells out what it considers the important quality characteristics of software (Yahaya, 2010):

  • Functionality
  • Usability
  • Reliability
  • Performance
  • Supportability

According to this model, functionality is the most important characteristic of software quality. The model was later extended with additional quality attributes and is now called FURPS+; the “+” stands for the other quality attributes included in the two other quality models.
ISO 9126-1 is another software quality model, developed in 1991, which is based on the quality models just mentioned (McCall, Boehm, and FURPS) (Yahaya, 2010). It identifies six main quality characteristics:

  • Functionality
  • Reliability
  • Usability
  • Efficiency
  • Maintainability
  • Portability

As mentioned earlier, quality mostly depends on the needs of stakeholders, so we must consider the quality characteristics that are important for the project we are using in our research. Such projects will naturally strive to develop their software with the qualities that are important to them. Drupal has its own set of quality characteristics that the community wants to see in its product (Drupal Mission and Values, http://drupal.org/node/10250):

  • Product Level Quality
    • Flexibility
    • Simplicity
    • Utility
  • Code Level Quality
    • Modularity
    • Extensibility
    • Maintainability

The models above provide us with a catalog of quality characteristics to look for in software. However, they only list the characteristics that determine whether software is of good quality; they do not provide any practical method for measuring those characteristics. For example, they do not tell us how to measure how understandable or testable a piece of software is. We still need to find methods for measuring these characteristics.
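As a deliberately crude illustration of what such a measurement could look like (our own stand-in, not a method prescribed by any of the models above), the Python sketch below approximates “understandability” by the comment-to-code ratio of a PHP source file:

    # Crude proxy for "understandability": fraction of non-blank lines that
    # are comments. Purely illustrative; real measurement needs a PHP parser.
    def comment_ratio(php_source: str) -> float:
        lines = [ln.strip() for ln in php_source.splitlines() if ln.strip()]
        comments = [ln for ln in lines if ln.startswith(("//", "#", "/*", "*"))]
        return len(comments) / len(lines) if lines else 0.0

    sample = """<?php
    // Load the menu tree.
    function menu_tree() {
      /* Heavy lifting happens here. */
      return _menu_build_tree();
    }
    """
    print(f"comment ratio: {comment_ratio(sample):.2f}")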

Christoph Treude’s summary of his own use of grounded theory was helpful in providing some context and understanding for the more formal descriptions of grounded theory (GT) (i.e., J. Corbin and A. Strauss, “Grounded Theory Research: Procedures, Canons, and Evaluative Criteria,” Qualitative Sociology, vol. 13, no. 1, pp. 3-21, 1990).  Among other things, he mentions that, although not officially following a GT approach, much of his previous research process included a similar philosophy.

Corbin and Strauss describe a very detailed approach to conducting true grounded theory research in a formal but “pragmatic” manner.  As an overall approach, it seems very useful.  However, I’m concerned that, given the timeframe for our current project and our complete lack of experience with GT, attempting to follow all of the procedures and canons identified by Strauss and Corbin would create too much overhead.

However, there are a few concepts that I think apply very well to the current state of our research.  At this time, we have a general research goal of identifying patterns in the relationship of requirements’ sources to OSS quality and, ideally, identifying promising areas of future work.

Formal GT practices suggest (1) that data collection and data analysis should be tied together, (2) that the research process itself is important and (3) that hypotheses should evolve over time until they are verified, not simply proposed and verified/invalidated.

Using these concepts, I propose we adjust our original research proposal slightly to use a degree of data collection and GT analysis in order to develop more refined hypotheses and suggestions for future work:

  1. Identification of the measurable aspects of OSS projects that are a reflection of overall quality.
  2. Identification of a target project with enough data to allow easy data mining but not so much as to preclude completing a short pilot of our research methods.
  3. Qualitative analysis of the project: review a set of project issue reports and code to determine the set of requirements sources involved.
  4. Review the same set of project issue reports for quality information.
  5. Grounded theory analysis of the requirements sources and resulting quality data to identify:
    1. Relationships between specific sources and specific increases/decreases in quality
    2. Overall trends and patterns regarding requirements sources and overall project quality.
  6. Develop a more detailed proposal for future work that discusses:
    1. The original goals of our work
    2. The collected quality metrics: which were collected and which proved useful.
    3. The identified requirements sources: what are they, how many of each, how they were identified, etc.
    4. Any identified relationships/patterns when looking at requirements sources and quality measures together.
    5. Suggestions for future work

Further reading shows that even my proposed changes above are an incorrect approach to leveraging grounded theory.  In [S. Adolph, W. Hall, and P. Kruchten, “A Methodological Leg to Stand On: Lessons Learned Using Grounded Theory to Study Software Development,” in Proceedings of the 2008 Conference of the Center for Advanced Studies on Collaborative Research (CASCON ’08), 2008, pp. 1-13], the authors discuss how their use and understanding of grounded theory changed over the course of three incremental studies.  They began (as it turns out many others do) with an interpretation very similar to what I proposed above: collect data and incrementally analyze it.  It turns out even this misses part of the point of grounded theory.  The idea is to truly do data collection in parallel with analysis so that the theory evolves in an organized way.  To interview/survey/collect data, incrementally code and analyze it, and then pose a new set of research questions still follows a traditional propose/verify approach.  The goal should be to start with a broad research question so as to allow for true growth of the theory in any direction that may arise.

With this in mind, “steps” 3 and 4 above really need to include data analysis at the same time. The coding for the requirements sources and the quality measures used will certainly be based on the results of the literature review from steps 1 and 2, but we can’t actually decide on the possible source categories or quality measures until we (a) start coding the data and (b) start analyzing and seeing what quality questions we have.

Because we’ll need to gather more and more data over time, we need to know in advance how to run some basic queries across the available dataset.  If we have some way to “program” or define a query, run it over a set of issues, and add the resulting data to results we already have (i.e., via an issue key to tie everything together), that would be ideal.
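A minimal Python sketch of what that could look like; the issue records and the example predicate below are hypothetical stand-ins for data exported from the drupal.org issue tracker:

    # Run a reusable query (a predicate) over a set of issues and merge the
    # answers into existing results, tied together by the issue id key.
    existing = {
        101: {"release": "6.0-beta2", "code": "bug fix"},
        102: {"release": "6.0-rc2", "code": "bug fix"},
    }

    issues = [
        {"id": 101, "title": "XSS fix in comment module"},
        {"id": 102, "title": "Menu link weight ignored"},
    ]

    def run_query(issues, predicate):
        """Evaluate a predicate over issues, returning {issue id: result}."""
        return {issue["id"]: predicate(issue) for issue in issues}

    # Example query: flag issues whose title suggests a security fix.
    security = run_query(issues, lambda i: "XSS" in i["title"])

    # Merge the new column into the data we already have via the issue key.
    for issue_id, flag in security.items():
        existing.setdefault(issue_id, {})["security"] = flag

    print(existing)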

Drupal

Posted on: April 17, 2012

Khalid is looking into using Drupal as our target project.  Dr. Ludi said that sounds promising.  Although it is large, she said we can probably focus on a single subsystem (e.g. the core) and a single release.  She suggests release 6 since it’s stable.
