{"ID":78536,"post_author":"9203512","post_date":"2019-01-04 17:48:47","post_date_gmt":"0000-00-00 00:00:00","post_content":"","post_title":"Software Development for Medical Devices Using Continuous Integration: A Brief Introduction","post_excerpt":"","post_status":"draft","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"","to_ping":"","pinged":"","post_modified":"2019-01-04 17:48:47","post_modified_gmt":"2019-01-04 22:48:47","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.limsforum.com\/?post_type=ebook&#038;p=78536","menu_order":0,"post_type":"ebook","post_mime_type":"","comment_count":"0","filter":"","_ebook_metadata":{"enabled":"on","private":"0","guid":"5EA0EF41-76B3-4614-B076-00D29C4282B3","title":"Software Development for Medical Devices Using Continuous Integration: A Brief Introduction ","subtitle":"","cover_theme":"nico_6","cover_image":"https:\/\/www.limsforum.com\/wp-content\/plugins\/rdp-ebook-builder\/pl\/cover.php?cover_style=nico_6&subtitle=&editor=Shawn+Douglas+%28Admin%29&title=Software+Development+for+Medical+Devices+Using+Continuous+Integration%3A+A+Brief+Introduction+&title_image=&publisher=Shawn+Douglas+%28Admin%29","editor":"Shawn Douglas (Admin)","publisher":"Shawn Douglas (Admin)","author_id":"9203512","image_url":"","items":{"c52457e4a7209968c2325bdf3bcebdb3_type":"article","c52457e4a7209968c2325bdf3bcebdb3_title":"Practical application of Agile","c52457e4a7209968c2325bdf3bcebdb3_url":"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Practical_Application_of_Agile","c52457e4a7209968c2325bdf3bcebdb3_plaintext":"\n\n\t\t\n\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\n\t\t\t\tLII:Medical Device Software Development with Continuous Integration\/Practical Application of Agile\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\tFrom LIMSWiki\n\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\tJump to: navigation, search\n\n\t\t\t\t\t\n\t\t\t\t\t-----Return to the beginning of this 
guide-----\nTicketing system as a trigger for code peer review \nSomewhat recently I was thinking about what ticket status might be appropriate when using issue tracking for all tasks from functional requirements to documentation to defect tracking. It got me thinking about the need for peer reviews of code and how tedious these reviews can be. It turns out there is at least one plugin for Trac that includes hooks for annotation of code for the sake of peer review. It does not, however, appear to include any kind of formal sign-off capability.\nI started thinking that it would be nice to have a plugin for peer reviews (for Trac or Redmine or whatever). If we define our workflow wisely, however, making the peer review process an integral part of it, we can probably simplify things. Do we really need a plugin, or can we simply use an \"In-Review\" status to achieve the same thing? I suppose the answer to this depends on how strict you want to be.\nHere\u2019s what I\u2019m thinking with regard to the history of a ticket (or issue, task, work item, or whatever we choose to call it):\n\n New\n In-Progress\n Resolved (or, if we determine that a ticket should not be completed, we have alternatives, such as deferred, rejected, duplicate, etc.)\n In-Review\n Closed\nWith a setup such as this, we can use the \"Resolved\" status as an indicator that an issue has been completed, but it is not yet ready to be closed. Tickets are only closed when appropriate peer review actions have been taken. Who determines what these actions are? That is up to the project manager (or the team lead), and it is enforced by proper routing of the ticket. Each individual responsible for peer review is assigned the ticket. 
Seeing the \"In-Review\" status, this colleague reviews the code changes (observing the changeset that is attached to the ticket) and makes comments (in the ticket notes).\nI know this sounds like a bit of legwork, but I see a few major benefits of an approach like this:\n\nTracing - We now have an audit log of all peer review comments. Using our ticket system with configuration management integration, tickets, changesets and review comments are linked together and not lost in some email thread or document somewhere.\nTime Savings \u2013 Anyone who has ever sat through a peer review (and I\u2019m guessing most project managers and developers have) knows how insanely time-consuming they can be. Because nobody ever seems to have time, we attempt to save time by doing a large review of code; we wait for a long time, and then we are faced with a peer review involving an overwhelming amount of code. This leads to the next benefit\u2026\nBetter Focus of Reviews - I don\u2019t know about you, but I find that I am much better at reviewing a smaller amount of code or a single functional area than attempting to review thousands of lines of code all at once. We\u2019re all busy, and this isn\u2019t going to change. What happens when you find out that you have a peer review at the end of the week and you have to read through and mark up 5 class files? Do you set aside everything you are working on and do it? You try, but time is short, and so you hurry.\nCommunication - When I take the time to review a changeset, it benefits both the team and the individual performing the review. Now I am better informed about what others are working on, where it is implemented, how it is implemented, etc. I don\u2019t have to go bug Joe the Developer to ask him if he finished such-and-such. 
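The ticket lifecycle above can be sketched as a small state machine. The sketch below is purely illustrative: the status names mirror the list above, but the class name, method names, and transition rules are my own assumptions, not the configuration of Trac, Redmine, or any particular tracker.

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative sketch of the ticket lifecycle described above.
// The transition rules are an assumption for demonstration; a real
// tracker (Trac, Redmine, etc.) would enforce its own workflow.
public class TicketWorkflow {

    public enum Status { NEW, IN_PROGRESS, RESOLVED, IN_REVIEW, CLOSED }

    // Allowed next statuses for each current status.
    static Set<Status> allowedNext(Status current) {
        switch (current) {
            case NEW:         return EnumSet.of(Status.IN_PROGRESS);
            case IN_PROGRESS: return EnumSet.of(Status.RESOLVED);
            // A resolved ticket is routed to the reviewer, not closed directly.
            case RESOLVED:    return EnumSet.of(Status.IN_REVIEW);
            // Review may pass (close the ticket) or send the work back.
            case IN_REVIEW:   return EnumSet.of(Status.CLOSED, Status.IN_PROGRESS);
            default:          return EnumSet.noneOf(Status.class);
        }
    }

    public static void main(String[] args) {
        // A ticket cannot jump from RESOLVED straight to CLOSED:
        System.out.println(allowedNext(Status.RESOLVED).contains(Status.CLOSED));  // false
        System.out.println(allowedNext(Status.IN_REVIEW).contains(Status.CLOSED)); // true
    }
}
```

The important property is encoded in the transitions: a ticket cannot move from Resolved straight to Closed; it must pass through In-Review, which is what makes peer review an enforced part of the workflow rather than an optional step.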
It means that we have to have well organized tickets and we have to commit changesets in some meaningful fashion. This should be a no-brainer.\n\nReferences \n\nSource: <a rel=\"external_link\" class=\"external\" href=\"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Practical_Application_of_Agile\">https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Practical_Application_of_Agile<\/a>\n
\r\n\n\t\n\t\r\n\n\t\n\t\r\n\n\t\r\n\n\t\r\n\n\t\r\n\t\t\n\t\t\n\t\t\t\n\t\t\n\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t This page was last modified on 27 April 2016, at 23:23.\n\t\t\t\t\t\t\t\t\tThis page has been accessed 277 times.\n\t\t\t\t\t\t\t\t\tContent is available under a Creative Commons Attribution-ShareAlike 4.0 International License unless otherwise noted.\n\t\t\t\t\t\t\t\t\tPrivacy policy\n\t\t\t\t\t\t\t\t\tAbout LIMSWiki\n\t\t\t\t\t\t\t\t\tDisclaimers\n\t\t\t\t\t\t\t\n\t\t\n\t\t\n\t\t\n\n","c52457e4a7209968c2325bdf3bcebdb3_html":"<body class=\"mediawiki ltr sitedir-ltr ns-202 ns-subject page-LII_Medical_Device_Software_Development_with_Continuous_Integration_Practical_Application_of_Agile skin-monobook action-view\">\n<div id=\"rdp-ebb-globalWrapper\">\n\t\t<div id=\"rdp-ebb-column-content\">\n\t\t\t<div id=\"rdp-ebb-content\" class=\"mw-body\" role=\"main\">\n\t\t\t\t<a id=\"rdp-ebb-top\"><\/a>\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t<h1 id=\"rdp-ebb-firstHeading\" class=\"firstHeading\" lang=\"en\">LII:Medical Device Software Development with Continuous Integration\/Practical Application of Agile<\/h1>\n\t\t\t\t\n\t\t\t\t<div id=\"rdp-ebb-bodyContent\" class=\"mw-body-content\">\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\t\n\n\t\t\t\t\t<!-- start content -->\n\t\t\t\t\t<div id=\"rdp-ebb-mw-content-text\" lang=\"en\" dir=\"ltr\" class=\"mw-content-ltr\"><div align=\"center\">-----Return to <a href=\"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\" title=\"LII:Medical Device Software Development with Continuous Integration\" target=\"_blank\" class=\"wiki-link\" data-key=\"3cb3f79774b24a8afa847a72c56c4835\">the beginning<\/a> of this guide-----<\/div>\n<h2><span class=\"mw-headline\" id=\"Ticketing_system_as_a_trigger_for_code_peer_review\">Ticketing system as a trigger for code peer 
review<\/span><\/h2>\n<p>Somewhat recently I was thinking about what ticket status might be appropriate when using issue tracking for all tasks from functional requirements to documentation to defect tracking. It got me thinking about the need for peer reviews of code and how tedious these reviews can be. It turns out there is at least one plugin for Trac that includes hooks for annotation of code for the sake of peer review. It does not, however, appear to include any kind of formal sign-off capability.\n<\/p><p>I started thinking that it would be nice to have a plugin for peer reviews (for Trac or Redmine or whatever). If we define our workflow wisely, however, making the peer review process an integral part of it, we can probably simplify things. Do we really need a plugin, or can we simply use an \"In-Review\" status to achieve the same thing? I suppose the answer to this depends on how strict you want to be.\n<\/p><p>Here\u2019s what I\u2019m thinking with regard to the history of a ticket (or issue, task, work item, or whatever we choose to call it):\n<\/p>\n<ul><li> New<\/li>\n<li> In-Progress<\/li>\n<li> Resolved (or, if we determine that a ticket should not be completed, we have alternatives, such as deferred, rejected, duplicate, etc.)<\/li>\n<li> In-Review<\/li>\n<li> Closed<\/li><\/ul>\n<p>With a setup such as this, we can use the \"Resolved\" status as an indicator that an issue has been completed, but it is not yet ready to be closed. Tickets are only closed when appropriate peer review actions have been taken. Who determines what these actions are? That is up to the project manager (or the team lead), and it is enforced by proper routing of the ticket. Each individual responsible for peer review is assigned the ticket. 
Seeing the \"In-Review\" status, this colleague reviews the code changes (observing the changeset that is attached to the ticket) and makes comments (in the ticket notes).\n<\/p><p>I know this sounds like a bit of legwork, but I see a few major benefits of an approach like this:\n<\/p>\n<ol><li><b>Tracing<\/b> - We now have an audit log of all peer review comments. Using our ticket system with configuration management integration, tickets, changesets and review comments are linked together and not lost in some email thread or document somewhere.<\/li>\n<li><b>Time Savings<\/b> \u2013 Anyone who has ever sat through a peer review (and I\u2019m guessing most project managers and developers have) knows how insanely time-consuming they can be. Because nobody ever seems to have time, we attempt to save time by doing a large review of code; we wait for a long time, and then we are faced with a peer review involving an overwhelming amount of code. This leads to the next benefit\u2026<\/li>\n<li><b>Better Focus of Reviews<\/b> - I don\u2019t know about you, but I find that I am much better at reviewing a smaller amount of code or a single functional area than attempting to review thousands of lines of code all at once. We\u2019re all busy, and this isn\u2019t going to change. What happens when you find out that you have a peer review at the end of the week and you have to read through and mark up 5 class files? Do you set aside everything you are working on and do it? You try, but time is short, and so you hurry.<\/li>\n<li><b>Communication<\/b> - When I take the time to review a changeset, it benefits both the team and the individual performing the review. Now I am better informed about what others are working on, where it is implemented, how it is implemented, etc. I don\u2019t have to go bug Joe the Developer to ask him if he finished such-and-such. 
I already know that he did because I reviewed his code.<\/li><\/ol>\n<p>This all assumes that our team follows good project management when it comes to the handling of issue tracking and version control. It means that we have to have well organized tickets and we have to commit changesets in some meaningful fashion. This should be a no-brainer.\n<\/p>\n<h2><span class=\"mw-headline\" id=\"References\">References<\/span><\/h2>\n<div class=\"reflist\" style=\"list-style-type: decimal;\">\n<\/div>\n\n<\/div><div class=\"printfooter\">Source: <a rel=\"external_link\" class=\"external\" href=\"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Practical_Application_of_Agile\">https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Practical_Application_of_Agile<\/a><\/div>\n\t\t\t\t\t\t\t\t\t\t<!-- end content -->\n\t\t\t\t\t\t\t\t\t\t<div class=\"visualClear\"><\/div>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t\t<!-- end of the left (by default at least) column -->\n\t\t<div 
class=\"visualClear\"><\/div>\n\t\t\t\t\t\n\t\t<\/div>\n\t\t\n\n<\/body>","c52457e4a7209968c2325bdf3bcebdb3_images":[],"c52457e4a7209968c2325bdf3bcebdb3_timestamp":1546642134,"fe175ec1d1846bbf56d90860f4b8b11a_type":"article","fe175ec1d1846bbf56d90860f4b8b11a_title":"Validation","fe175ec1d1846bbf56d90860f4b8b11a_url":"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Validation","fe175ec1d1846bbf56d90860f4b8b11a_plaintext":"\n\n\t\t\n\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\n\t\t\t\tLII:Medical Device Software Development with Continuous Integration\/Validation\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\tFrom LIMSWiki\n\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\tJump to: navigation, search\n\n\t\t\t\t\t\n\t\t\t\t\t-----Return to the beginning of this guide-----\nContents\n\n1 Overview of unit tests \n\n1.1 Early attempts to automate functional testing \n1.2 Automating functional tests using unit test framework \n1.3 What is a good unit test? \n\n\n2 What is the value of unit testing? \n\n2.1 Immediate feedback within continuous integration: Developer confidence \n\n2.1.1 Easy refactoring \n2.1.2 Regression tests with every code change \n2.1.3 Concurrency tests \n2.1.4 Repeatable and traceable test results \n2.1.5 Regulated environment needs \n2.1.6 Document the approach \n2.1.7 Label and trace tests \n\n\n\n\n3 The traceability matrix \n\n3.1 Do we still need manual tests? \n\n\n4 Notes \n5 References \n\n\n\nOverview of unit tests \nIn computer programming, unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure. In object-oriented programming a unit is usually a method. Unit tests are created by programmers or occasionally by white box testers during the development process. 
In the world of Java, we have a number of popular options for the implementation of unit tests, with JUnit and TestNG being, arguably, the most popular. Examples provided in this article will use TestNG syntax and annotations.\nTraditionally (and by traditionally, I mean in their relatively brief history), unit tests have been thought of as very simple tests to validate basic inputs and outputs of a software method. While this can be true, and such simple tests can be of some value, it is possible to achieve much more with unit tests. In fact, it is not only possible but recommended that we implement much of our user acceptance, functional, and possibly even some non-functional tests within a unit test framework.\nTo further enhance quality, we can augment acceptance testing with unit tests.[1] While I personally have never been a fan of test-driven development (I feel that the assumptions required by test-driven development do not allow for a true iterative approach), I do believe that creation of unit tests in parallel with development leads to much higher-quality software. In the world of Agile, this means that no functional requirement (or user story) is considered fully implemented without a corresponding unit test. This strict view of unit tests may be a bit extreme, but it is not without merit.\nThe first unit test a developer may ever write is likely so simple that it's nearly useless. It may go something like this.\nGiven a method:\n\npublic int doSomething(int a, int b) {\n    \u2026\n    return c;\n}\nA simple unit test may look something like this:\n\npublic class MyUnitTests {\n    @Test\n    public void testDoSomething() {\n        assertEquals(doSomething(1, 2), expectedResult);\n    }\n}\nGiven a very simple method the developer is able to assert that, essentially, a + b = c. 
This is easy to write, and there is little overhead involved, but it really isn\u2019t a very useful unit test.\n\nEarly attempts to automate functional testing \nLong ago I was involved with a project in which management invested a significant amount of time and training in an attempt to implement automated testing. The chosen tool was Rational Robot (now an IBM product). The idea behind tools such as Robot was that a test creator could record test macros, note points of verification, and replay the macros later with test results noted. Tools such as Rational Robot and WinRunner attempted to replace the human tester with recorded scripts. These automated scripts could be written using a scripting language or, more commonly, by recording mouse movements, clicks, and keyboard actions. In this regard, these tools of test automation allowed black-box testing through a user interface.\nIn this over-simplified view of automated testing, there were simply too many logistical problems with test implementation to make them practical. Any minor changes to the user interface would result in a broken test script. Those responsible for maintaining these automated scripts often found themselves spending more time maintaining the tests than using them for actual application testing.\nRational Robot and tools like it are alive and well, but I refer to them in the past tense because such tools, in my experience, have proven themselves to be a failure. I say this because I have personally spent significant amounts of time creating automated scripts in such tools, and I have been frustrated to learn later that they would not be used because of the substantial amount of interface code that changes as a project progresses. 
Such changes are absolutely expected, and yet, a recorded automated test does not lend itself well to an iterative development environment or an ongoing project.\n\nAutomating functional tests using unit test framework \nMost software projects, especially in any kind of Agile environment, undergo frequent changes and refactoring. If the traditional single-flow waterfall model worked, recorded test scripts such as those noted previously would probably work just fine as well, albeit with little benefit.\nBut it should be well known by now that the traditional single-flow waterfall model has failed, and we live in an iterative\/Agile world. As such, our automated tests must be equally equipped for ongoing change. And because the functional unit tests are closely related to requirements at both a white-box and black-box level, developers, not testers, have an integral role in the creation of automated tests.\nTo achieve this level of unit testing, a test framework must be in place. This requires a bit of up-front effort, and the details of creating such a framework go well beyond the scope of this article. Additionally, the needs of a test framework will vary depending on the project.\nTest fixtures become an important part of complex functional unit testing. A test fixture is a class that incorporates all of the setup necessary for running such unit tests. It provides methods that can create common objects (for example, test servers and mock interfaces). The details included in a test fixture are specific to each project, but some common methods include test setup, simulation, and mock object creation and destruction, as well as declaration of any common functionality to be used across unit tests. Further detail on test fixture creation is beyond what can be provided here.\nGiven what may seem like extreme overhead in the creation of complex unit tests, we may begin to question the value. 
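The fixture idea described above can be sketched in a few lines of plain Java. Everything here is hypothetical (the class name, the methods, and the mock data store are mine, invented for illustration); a real project would typically hook setUp() and tearDown() into TestNG's @BeforeClass and @AfterClass annotations rather than calling them by hand.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a test fixture; all names here are hypothetical.
// A real project would wire setUp()/tearDown() to TestNG's @BeforeClass
// and @AfterClass annotations instead of calling them explicitly.
public class TestFixture {

    // Stand-in for a test server or mock interface the tests share.
    private final Map<String, String> mockDataStore = new HashMap<>();
    private boolean running = false;

    // Create the common objects every functional unit test needs.
    public void setUp() {
        mockDataStore.put("device.status", "IDLE");
        running = true;
    }

    // Destroy mock objects so tests never leak state into each other.
    public void tearDown() {
        mockDataStore.clear();
        running = false;
    }

    public boolean isRunning() { return running; }

    public String lookup(String key) { return mockDataStore.get(key); }
}
```

Each unit test then asks the fixture for its simulated environment instead of building one itself, which is what keeps the per-test cost low once the up-front work is done.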
There is, no doubt, a significant up-front cost to the creation of a versatile and useful unit test framework (including a test fixture, which includes all the necessary objects and setup needed to simulate a running environment for the sake of testing). And given the fact that manual functional and user acceptance testing remains a project necessity, it seems like there may be an overlap of effort.\nBut this is not the case.\nWith a little up-front investment in a solid unit test framework, we can make the creation of unit tests simple. We can even go as far as requiring a unit test for any functional requirement implementation prior to allowing that requirement (or ticket) to be considered complete. Furthermore, as we discover potential functionality problems, we have the opportunity to introduce a new test right then and there!\nThe hardware system, software program, and general quality assurance system controls discussed below are essential in the automated manufacture of medical devices. The systematic validation of software and associated equipment will assure compliance with the QS regulation and reduce confusion, increase employee morale, reduce costs, and improve quality. Further, proper validation will smooth the integration of automated production and quality assurance equipment into manufacturing operations. Medical devices and the manufacturing processes used to produce them vary from the simple to the very complex. Thus, the QS regulation needs to be and is a flexible quality system. This flexibility is increasingly valuable as more device manufacturers move to automated production, test\/inspection, and record-keeping systems.[2]\n\nWhat is a good unit test? \nIn his book Safe and Sound Software, Thomas H. Faris describes the unit test as such:\n\nSoftware testing may occur on software modules or units as they are completed. Unit testing is effective for testing units as they are completed, when other units or components have not yet been completed. 
Testing still remains to be completed to ensure that the application will work as intended when all software units or components are executed together.[3]\nThis is a start, but unit tests can achieve so much more! Faris goes on to describe a number of different categories of software tests[3]:\n\n Black box test\n Unit test\n Integration test\n System test\n Load test\n Regression test\n Requirements-based test\n Code-based test\n Risk-based test\n Clinical test\nTraditionally this may be considered a fair list. Used wisely, and with the proper framework, however, we can perform black box, integration, system, load, regression, requirements, code-based, risk-based, and clinical tests with efficient unit tests that simulate a true production environment. The purpose of this article is not to go into the technical details of how (to explain unit test frameworks, fixtures, mock objects and simulations would require much more space). Rather, I simply want to point out the benefits that result. To achieve these benefits, your software team will need to develop a deep understanding of unit tests. It will take some time, but it will be time very well spent.\nIt\u2019s a good idea to have unit tests that go above and beyond what we traditionally think of as unit tests, and go several steps further, automating functional testing. This is another one of those areas where team members often (incorrectly) feel that there is not sufficient time to do all the work. As Faris goes on to state:\n\nSoftware testing and defect resolution are very time-consuming, often draining more than one-half of all effort undertaken by a software organization ... Testing need not wait until the entire product is completed; iteratively designed and developed code may be tested as each iteration of code is completed. 
Prior to beginning of verification or validation, the project plan or other test plan document should discuss the overall strategy, including types of tests to be performed, specific functional tests to be performed, and a designation of test objectives to determine when the product is sufficiently prepared for release and distribution.[3]\nFaris is touching on something that is very important in our FDA-regulated environment, and this is the fact that we must document and describe our tests. For our unit tests to be useful we must provide documentation of what each test does (that is, what specifically it is testing) and what the results are. The beauty of unit tests and the tools available (incorporation into our continuous integration environment) is that this process is streamlined in a way that makes the traceability and re-creation of test conditions required for our 510(k) extremely easy!\nTo achieve all of this we will need to have a testing framework capable of application launch, simulations, mock objects, mock interfaces and temporary data persistence. This all sounds like much more overhead than it actually is, but fear not: the benefits far outweigh the costs.\n\nWhat is the value of unit testing? \nImmediate feedback within continuous integration: Developer confidence \nToo often we view testing as an activity that occurs only at specific times during software development. At worst, software testing takes place upon completion of development (which is when it is learned that development is nowhere near complete). In other more zealous environments, it may take place at the end of each iteration. We can do better! How about complex unit tests performing validation continuously, with each code change? It is possible to perform full regression tests with every single code change. It sounds like a significant amount of overhead, but it is not. 
The real cost to a project is not the attention we pay to complex functional unit tests; the danger is that we put off testing until it is too late to react to a critical issue discovered during some predetermined testing phase.\nThe most effective way of killing a project is to organize it so that testing becomes an activity so critical to its success that we do not allow for the possibility that testing can do what it is supposed to do: discover a defect prior to go-live.\nAt its most basic level, a continuous integration build environment does just one thing: it runs whatever scripts we tell it to. To that end, it is important that the CI build execute unit tests and that a failure of any single unit test is considered a failure of the continuous integration build. The power of a tool such as Jenkins is that we can tell it to run whatever we want, log the outcome, keep build artifacts, run third-party evaluation tools, and report on results. With integration of our software version control system (e.g., Subversion, Git, Mercurial, CVS, etc.), we know the changeset relevant to a particular build. It can be configured to generate a build at whatever interval we want (nightly, hourly, every time there is a code commit, etc.). When a test fails, we know immediately what changeset was involved.\nPersonally, every time I do any code commit of significance, one of the first things I do is check the CI build for success. If I\u2019ve broken the build, I get to work on correcting the problem (and if I cannot correct the problem quickly, I roll my changeset out so that the CI build continues to work until I\u2019ve fixed the issue).\n\nEasy refactoring \nAs a developer, refactoring can be a scary thing. Refactoring is perhaps the most effective way of introducing a serious defect while doing something that seems innocuous. 
With thorough unit tests performing a full regression test with each and every committed software changeset, however, a developer can have confidence that his or her simple code changes have not introduced a defect. We have continuous integration builds running our tests for many reasons, not the least of which is to alert developers to the possibility that their changes have broken the build.\nAs a developer I strive to avoid breaking the continuous integration build. When I do break it, however, I am very pleased to know that what was done to cause a problem has been discovered immediately. Correction of a defect becomes much more costly when the defect is not discovered until the end of a development phase!\n\nRegression tests with every code change \nBy \"repeated\" I mean something different from repeatable. The fundamental benefit with repeated tests is the fact that a test can be executed many more times by automation than by a human tester. Sometimes, even without a related code change, and much to our surprise, we see a test suddenly fail where it succeeded numerous times before. What happened?\nThe most difficult software defects to fix (much less, find) are the ones that do not happen consistently. Database locking issues, memory issues, deadlock bugs, memory leaks, and race conditions can result in such defects. These defects are serious, but if we never detect them, how can we fix them?\nAs stated previously, it is imperative that we have unit tests that go above and beyond what we traditionally think of as unit tests, going several steps further, automating functional testing. This is another one of those areas where team members often (incorrectly) feel that there is not sufficient time to deal with the creation of unit tests. Given a proper framework, however, creation of unit tests need not be overwhelming.\nAnother occasional issue has to do with misuse of the software version control system. 
Many developers know the frustration that can come with an accidental code change resulting from one developer stepping over the modifications of another. While this is a rare issue in a properly used version control environment, it does still happen, and unit tests can quickly reveal such a problem at build time.\n\nConcurrency tests \nConcurrency tests are tricky, and it is in concurrency testing that the repeated and rapid nature of functional unit tests can shine where human testers cannot. I personally have witnessed many occasions in which a CI build suddenly fails for no obvious reason. There was no code commit related to the particular point-of-failure, and yet a unit test that once succeeded suddenly fails? Why?\nThis can happen (and it does happen) because concurrency problems, by their very nature, are hit or miss. Sometimes they are so unlikely to occur that we never witness them during the course of normal testing. When a continuous integration environment runs concurrency tests dozens of times a day, however, we increase the likelihood of finding a hidden and menacing problem. Additionally, unit tests can simulate many concurrent users and processes in a way that even a team of human testers cannot.\n\nRepeatable and traceable test results \nThis is the key to making our unit tests adhere to the standards we have set forth in our quality system so that we may use them as a part of our submission (see the following section on Regulated Environment Needs). If we are going to put forth the effort, and since we already know that unit tests result in a quality improvement to our software, why wouldn\u2019t we want to include these test results?\nOur continuous integration server can and should be used to store our unit test results right alongside each and every build that it performs.\nThis is a benefit not only in the world of an FDA-regulated environment, of course. 
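As a rough illustration of the concurrency testing discussed above, the sketch below hammers a shared counter from several threads released at the same instant. It is a plain-Java sketch (no test framework assumed, and the class and method names are mine); the point is that a test like this, run dozens of times a day by the CI server, can surface timing-dependent defects that a single manual run would likely miss.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a repeatable concurrency test in plain Java (no test
// framework assumed; names are illustrative). It increments a counter
// from many threads at once; repeated automated runs of a test like
// this can expose race conditions a human tester would rarely hit.
public class ConcurrencyTestSketch {

    static int runConcurrently(int threads, int incrementsPerThread) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();   // thread-safe counter under test
        CountDownLatch start = new CountDownLatch(1);  // releases all threads together
        CountDownLatch done = new CountDownLatch(threads);
        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                try {
                    start.await();
                    for (int j = 0; j < incrementsPerThread; j++) counter.incrementAndGet();
                } catch (InterruptedException ignored) {
                } finally {
                    done.countDown();
                }
            }).start();
        }
        start.countDown();  // fire all threads at once to maximize contention
        done.await();
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // If the counter were a plain int, this would intermittently lose updates.
        System.out.println(runConcurrently(8, 10_000)); // prints 80000
    }
}
```

Here the counter is an AtomicInteger, so the assertion always holds; swap it for an unsynchronized int and the hit-or-miss lost updates described above become intermittently visible, which is exactly why such tests belong in every CI run.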
In any software project it can be difficult to recreate conditions under which a defect was discovered. With a CI build executing our build and test scripts under a known environment with a known set of files (the CI build tool pulls from the version control system), it is possible to execute the tests under exact and specific circumstances.\nMany of the benefits of functional unit testing listed above are gained only when unit tests are written alongside design and development (test-driven methodologies aside). It is imperative that the development team develop and observe test results while design and development activities take place. This is of benefit to the quality assurance team as well, as Dean Leffingwell notes:\n\nA comprehensive unit test strategy prevents QA and test personnel from spending most of their time finding and reporting on code-level bugs and allows the team to move its focus to more system-level testing challenges. Indeed, for many agile teams, the addition of a comprehensive unit test strategy is a key pivot point in their move toward true agility \u2014 and one that delivers "best bang for the buck" in determining overall system quality.[4]\nAlso, it is probably becoming clear that a key benefit of functional unit tests is the real-time feedback offered to the development team. Humble and Farley refer to the unit tests that are executed with each software change as "commit tests."[5]\n\nCommit tests that run against every check-in provide us with timely feedback on problems with the latest build and on bugs in our application in the small.[5]\nProject unit tests, which should offer a significant amount of coverage (at least 80 percent), provide the team with built-in software change-commit acceptance criteria. 
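To make the idea of a coverage-based acceptance criterion concrete, here is a minimal, hypothetical sketch (the class, method, and numbers are mine, not from the original guide): coverage is treated as a pass/fail gate against the 80 percent threshold mentioned above.

```java
// Hypothetical sketch: treating unit test coverage as a commit acceptance
// criterion. The 80 percent threshold comes from the text above; the line
// counts are illustrative stand-ins for numbers a coverage tool would report.
public class CoverageGate {

    // Returns true when coverage meets or exceeds the threshold (0.0-1.0).
    public static boolean meetsThreshold(int coveredLines, int totalLines, double threshold) {
        return totalLines > 0 && (double) coveredLines / totalLines >= threshold;
    }

    public static void main(String[] args) {
        System.out.println(meetsThreshold(850, 1000, 0.80)); // 85% covered: passes the gate
        System.out.println(meetsThreshold(700, 1000, 0.80)); // 70% covered: fails the gate
    }
}
```

In practice the covered and total line counts would come from a coverage tool's report (such as JaCoCo or Cobertura) rather than hard-coded values, and the CI build would be failed when the gate returns false.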
If a developer causes the CI build to fail because of a code change, it is immediately known that the change involved does not meet minimum acceptance criteria, and it requires urgent attention.\nHumble and Farley continue:\n\nCrucially, the development team must respond immediately to acceptance test breakages that occur as part of the normal development process. They must decide if the breakage is a result of a regression that has been introduced, an intentional change in the behavior of the application, or a problem with the test. Then they must take appropriate action to get the automated acceptance test suite passing again.[5]\nRegulated environment needs \nPer 21 CFR Part 820.30 on design controls:\n\n(f) Design verification. Each manufacturer shall establish and maintain procedures for verifying the device design. Design verification shall confirm that the design output meets the design input requirements. The results of the design verification, including identification of the design, method(s), the date, and the individual(s) performing the verification, shall be documented in the design history file (DHF).[6]\nSimply put, our functional unit tests must be a part of our DHF, and we must document each test and test result (success or failure) as well as tie tests and outcomes to specific software releases. This is made extremely easy with a continuous integration environment in which builds and build outcomes (including test results) are stored on a server, labeled, and linked to from our DHF. Indeed, what is sometimes a tedious task when it comes to manual execution and documentation of test results becomes quite convenient.\nThe same is true of design validation:\n\n(g) Design validation. Each manufacturer shall establish and maintain procedures for validating the device design. Design validation shall be performed under defined operating conditions on initial production units, lots, or batches, or their equivalents. 
Design validation shall ensure that devices conform to defined user needs and intended uses and shall include testing of production units under actual or simulated use conditions. Design validation shall include software validation and risk analysis, where appropriate. The results of the design validation, including identification of the design, method(s), the date, and the individual(s) performing the validation, shall be documented in the DHF.[6]\nBecause our CI environment packages build and test conditions at a given point in time, we can successfully satisfy the requirements laid out by 21 CFR Part 820.30 (f) and (g) with very little effort. We simply allow our CI environment to do that which it does best, and that which a human tester may spend many hours attempting to do with accuracy.\n\nDocument the approach \nAs discussed, all these tests are indeed very helpful to the creation of good software. However, without a wise approach to incorporation of such tests in our FDA-regulated environment, they are of little use in any auditable capacity. It is necessary to document our approach to unit test usage and documentation within our standard operating procedures (SOPs) and work instructions, and this is to be documented in much the same way that we would document any manual verification and validation test activities.\nTo this end, it is necessary to make our unit tests and their outputs an integral part of our DHF. Each test must be traceable, and this means that unit tests are given unique identifiers. These unique identifiers are very easily assigned using an approach in which we organize tests in logical units (for example, by functional area) and label tests sequentially.\n\nLabel and trace tests \nAn approach that I have taken in the past is to assign some high-level numeric identifier and a secondary sub-identifier that is used for the specific test. 
For example, we may have the following functional areas: user session, audit log, data input, data output, and web user interface tests (these are very generic examples of functional areas, granted). Given such functional areas, I would label each test using test naming annotations, with the following high-level identifiers:\n\n 1000: user session tests\n 2000: audit log tests\n 3000: data input tests\n 4000: data output tests\n 5000: web user interface tests\nWithin each functional area it is then necessary to go a step further, applying a sequential identifier to each test. For example, the user test package may include tests for functional requirements such as user login, user logout, session expiration, and a multiple-user login concurrency test. In such a scenario, we would label the tests as follows:\n\n 1000_010: user login\n 1000_020: user logout\n 1000_030: session expiration\n 1000_040: multiple concurrent user login\nUsing TestNG syntax, along with proper Javadoc comments, it is very easy to label and describe a test such that inclusion in our DHF is indeed very simple.\n\n\/**\n * Test basic user login and session creation with a valid user.\n *\n * @throws Exception\n *\/\n@Test(dependsOnMethods = {\"testActivePatientIntegrationDisabled\"}, groups = {\"TS0005_AUTO_VE1023\"})\npublic void testActivePatientIntegrationEnabled() throws Exception {\n    Fixture myApp = new Fixture();\n    UserSession mySession = myApp.login(\"test_user\", \"test_password\");\n    assertNotNull(mySession);\n    assertTrue(mySession.active());\n}\nAny numbering we choose to use for these tests is fine, as long as we document our approach to test labeling in some project level document, for example a validation plan or master test plan. Such decisions are left to those who design and apply a quality system for the FDA-regulated project. 
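The numbering scheme above can even be generated mechanically. As a minimal, hypothetical sketch (the class and method names are mine, not from the original), sequential sub-identifiers are spaced by tens, leaving room to insert tests later without renumbering:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the labeling scheme described above: a high-level
// area identifier (e.g., 1000 for user session tests) plus a sequential
// sub-identifier spaced by tens, leaving room to add tests without renumbering.
public class TestLabeler {

    public static List<String> label(int areaCode, String... testNames) {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < testNames.length; i++) {
            // %03d zero-pads the sub-identifier: 10 -> "010", 20 -> "020", ...
            ids.add(String.format("%d_%03d: %s", areaCode, (i + 1) * 10, testNames[i]));
        }
        return ids;
    }

    public static void main(String[] args) {
        label(1000, "user login", "user logout", "session expiration",
              "multiple concurrent user login").forEach(System.out::println);
    }
}
```

Whether identifiers are generated or assigned by hand matters less than recording the convention in the validation plan or master test plan, as noted above.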
As most of us know by now, the FDA doesn\u2019t tell us exactly how we are to do things; rather, we are simply told that we must create a good quality system, trace our requirements through design, incorporate the history in our DHF, and recreate build and test conditions.\nIf I make this all sound a little too easy, it is because I believe it is easy. Too often we view cGMP guidance as a terrible hindrance to productivity, but we are in control of making things as efficient as we can.\n\nThe traceability matrix \nA critical factor in making unit tests usable in an auditable manner is incorporating them into the traceability matrix. As with any test, requirements, design elements, and hazards must be traced to one another through use of the traceability matrix.\n\nThe project team must document traceability of requirements through specification and testing to ensure that all requirements have been tested and correctly implemented (product requirements traceability matrix).[3]\nWith each automated test labeled, we can use the built-in JUnit or TestNG functionality (along with XSLT, if we so choose) to create output that is tied to the build number and changeset and traceable within our trace matrix. The output of our tests (which are run during each continuous integration build) may be as follows:\n\nTEST NAME          STATUS\nTS0005_AUTO_VE1022 PASS\nTS0005_AUTO_VE1023 PASS\nTS0005_AUTO_VE1024 FAIL\nTS0005_AUTO_VE1025 SKIP\n\u2026\nNaturally, we hope that all the automated tests pass, but when they fail we need to record the failure. It's my opinion that placing all test outcomes in the DHF is not necessary. Rather, the DHF can point to the continuous integration build server, where automated test results are bundled alongside each build. 
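As a rough sketch of how such a table might be produced (the class here is hypothetical; in a real project the outcomes would be parsed from TestNG or JUnit result files via XSLT, or emitted by a test listener), recorded outcomes can be rendered into the trace-matrix-friendly form shown above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: rendering recorded test outcomes as the simple
// TEST NAME / STATUS table used in the trace matrix. In practice the
// outcomes map would be populated from TestNG/JUnit XML results.
public class TraceReport {

    public static String render(Map<String, String> outcomes) {
        // %-20s left-pads test names into a fixed-width column.
        StringBuilder sb = new StringBuilder(String.format("%-20s %s%n", "TEST NAME", "STATUS"));
        outcomes.forEach((name, status) -> sb.append(String.format("%-20s %s%n", name, status)));
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> outcomes = new LinkedHashMap<>();
        outcomes.put("TS0005_AUTO_VE1022", "PASS");
        outcomes.put("TS0005_AUTO_VE1023", "PASS");
        outcomes.put("TS0005_AUTO_VE1024", "FAIL");
        outcomes.put("TS0005_AUTO_VE1025", "SKIP");
        System.out.print(render(outcomes));
    }
}
```

Because the report is generated from each CI build's own results, the build number and changeset can be attached to it, giving the trace matrix a direct pointer to the evidence for that build.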
Finally, at the end of a sprint or iteration, the appropriate test results for the final locked down build are captured in the DHF and traced appropriately per SOPs.\nOur SOPs and work instructions will require that we prove traceability of our tests and test results, whether manual or automated unit tests. Just as has always been done with the manual tests that we are familiar with, tests must be traced to software requirements, design specifications, hazards, and risks. The goal is simply to prove that we have tested that which we have designed and implemented, and in the case of automated tests, this is all very easy to achieve!\n\nDo we still need manual tests? \nYes! Absolutely! There are a number of reasons why manual tests are still, and always will be, required. Take for example installation qualification and environmental tests. Both manual and automated tests are valid and valuable, and neither should be considered a replacement for the other.\nI recall being a child in karate lessons. One day I came home from a lesson, very proud that I had learned to block a punch. \"Come at me with a punch,\" I said to my friend.\nDoing what I asked, he punched me right in the chest, and I failed to block the punch. This punch wasn't thrown the way I expected (the way we practiced in karate lessons).\n\"No, no, no! You\u2019re punching me the wrong way!\" I said. I only knew how to block one kind of punch, and when punched a different way, my block no longer worked. To me, this karate lesson highlights the difference between an exception and an error. Automated tests can provide error test coverage very well. But when thrown something unanticipated, they don\u2019t offer the creativity in and of themselves to find the issue.\nIt is up to us, developers and testers, to come up with creative punches to throw at our system. This is where manual testing allows a certain amount of \"creative\" punching that may not be considered during unit test development. 
Manual tests also lead to greater insight into usability and user interaction issues, giving feedback that automated tests cannot. To close the loop, a defect that is discovered during manual testing should result in a corresponding automated test.\n\nNotes \nThe original author had anticipated writing about the following sub-topics but never did: test fixture, mock objects, avoiding the Singleton design pattern, in-memory DB, and in-memory servlet container.\n\nReferences \n\n1. Leffingwell, D. (2011). Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise. Addison-Wesley Professional. p. 61. ISBN 9780321635846.\n2. "General Principles of Software Validation; Final Guidance for Industry and FDA Staff". Food and Drug Administration. 11 January 2002. http:\/\/www.fda.gov\/MedicalDevices\/DeviceRegulationandGuidance\/GuidanceDocuments\/ucm085281.htm . Retrieved 27 April 2016.\n3. Faris, T.H. (2006). Safe and Sound Software: Creating an Efficient and Effective Quality System for Software Medical Device Organizations. ASQ Quality Press. pp. 118\u2013123. ISBN 0873896742.\n4. Leffingwell, D. (2011). Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise. Addison-Wesley Professional. p. 196. ISBN 9780321635846.\n5. Humble, J.; Farley, D. (2010). Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. Addison-Wesley Professional. p. 124. ISBN 9780321601912.\n6. "Title 21--Food and Drugs, Part 820--Quality System Regulation, Sec. 
820.30 Design controls". CFR - Code of Federal Regulations Title 21. Food and Drug Administration. 21 August 2015. https:\/\/www.accessdata.fda.gov\/scripts\/cdrh\/cfdocs\/cfCFR\/CFRSearch.cfm?fr=820.30 . Retrieved 27 April 2016.\n\nSource: <a rel=\"external_link\" class=\"external\" href=\"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Validation\">https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Validation<\/a>\n\t\t\t 
This page was last modified on 27 April 2016, at 23:13. Content is available under a Creative Commons Attribution-ShareAlike 4.0 International License unless otherwise noted.\n","fe175ec1d1846bbf56d90860f4b8b11a_html":"<body class=\"mediawiki ltr sitedir-ltr ns-202 ns-subject page-LII_Medical_Device_Software_Development_with_Continuous_Integration_Validation skin-monobook action-view\">\n<div id=\"rdp-ebb-globalWrapper\">\n\t\t<div id=\"rdp-ebb-column-content\">\n\t\t\t<div id=\"rdp-ebb-content\" class=\"mw-body\" role=\"main\">\n\t\t\t\t<a id=\"rdp-ebb-top\"><\/a>\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t<h1 id=\"rdp-ebb-firstHeading\" class=\"firstHeading\" lang=\"en\">LII:Medical Device Software Development with Continuous Integration\/Validation<\/h1>\n\t\t\t\t\n\t\t\t\t<div id=\"rdp-ebb-bodyContent\" class=\"mw-body-content\">\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\t\n\n\t\t\t\t\t<!-- start content -->\n\t\t\t\t\t<div id=\"rdp-ebb-mw-content-text\" lang=\"en\" dir=\"ltr\" class=\"mw-content-ltr\"><div align=\"center\">-----Return to <a href=\"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\" title=\"LII:Medical Device Software Development with Continuous Integration\" target=\"_blank\" class=\"wiki-link\" data-key=\"3cb3f79774b24a8afa847a72c56c4835\">the beginning<\/a> of this guide-----<\/div>\n\n\n<h2><span class=\"mw-headline\" id=\"Overview_of_unit_tests\">Overview of unit tests<\/span><\/h2>\n<p>In computer programming, 
unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure. In object-oriented programming a unit is usually a method. Unit tests are created by programmers or occasionally by white box testers during the development process. In the world of Java, we have a number of popular options for the implementation of unit tests, with JUnit and TestNG being, arguably, the most popular. Examples provided in this article will use TestNG syntax and annotations.\n<\/p><p>Traditionally (and by traditionally, I mean in their relatively brief history), unit tests have been thought of as very simple tests to validate basic inputs and outputs of a software method. While this can be true, and such simple tests can be of some value, it is possible to achieve much more with unit tests. In fact, it is not only possible but recommended that we implement much of our user acceptance, functional, and possibly even some non-functional tests within a unit test framework.\n<\/p><p>To further enhance quality, we can augment acceptance testing with unit tests.<sup id=\"rdp-ebb-cite_ref-LeffingwellAgile11_1_1-0\" class=\"reference\"><a href=\"#cite_note-LeffingwellAgile11_1-1\" rel=\"external_link\">[1]<\/a><\/sup> While I personally have never been a fan of test-driven development (I feel that the assumptions required by test-driven development do not allow for a true iterative approach), I do believe that creation of unit tests in parallel with development leads to much higher-quality software. In the world of Agile, this means that no functional requirement (or user story) is considered fully implemented without a corresponding unit test. 
This strict view of unit tests may be a bit extreme, but it is not without merit.\n<\/p><p>The first unit test a developer may ever write is likely so simple that it's nearly useless. It may go something like this.\n<\/p><p>Given a method:\n<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\"><pre>public int doSomething(int a, int b) {\n    \u2026\n    return c;\n}<\/pre><\/div>\n<p>A simple unit test may look something like this:\n<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\"><pre>public class MyUnitTests {\n    @Test\n    public void testDoSomething() {\n        assertEquals(doSomething(1, 2), expectedResult);\n    }\n}<\/pre><\/div>\n<p>Given a very simple method the developer is able to assert that, essentially, a + b = c. This is easy to write, and there is little overhead involved, but it really isn\u2019t a very useful unit test.\n<\/p>\n<h3><span class=\"mw-headline\" id=\"Early_attempts_to_automate_functional_testing\">Early attempts to automate functional testing<\/span><\/h3>\n<p>Long ago I was involved with a project in which management invested a significant amount of time and training in an attempt to implement automated testing. The chosen tool was Rational Robot (now an IBM product). The idea behind tools such as Robot was that a test creator could record test macros, note points of verification, and replay the macros later with test results noted. Tools such as Rational Robot and WinRunner attempted to replace the human tester with recorded scripts. These automated scripts could be written using a scripting language or, more commonly, by recording mouse movements, clicks, and keyboard actions. In this regard, these tools of test automation allowed black-box testing through a user interface.\n<\/p><p>In this over-simplified view of automated testing, there were simply too many logistical problems with test implementation to make them practical. Any minor changes to the user interface would result in a broken test script. 
Those responsible for maintaining these automated scripts often found themselves spending more time maintaining the tests than using them for actual application testing.\n<\/p><p>Rational Robot and tools like it are alive and well, but I refer to them in the past tense because such tools, in my experience, have proven themselves to be a failure. I say this because I have personally spent significant amounts of time creating automated scripts in such tools, and I have been frustrated to learn later that they would not be used because of the substantial amount of interface code that changes as a project progresses. Such changes are absolutely expected, and yet, a recorded automated test does not lend itself well to an iterative development environment or an ongoing project.\n<\/p>\n<h3><span class=\"mw-headline\" id=\"Automating_functional_tests_using_unit_test_framework\">Automating functional tests using unit test framework<\/span><\/h3>\n<p>Most software projects, especially in any kind of Agile environment, undergo frequent changes and refactoring. If the traditional single-flow waterfall model worked, recorded test scripts such as those noted previously would probably work just fine as well, albeit with little benefit.\n<\/p><p>But it should be well known by now that the traditional single-flow waterfall model has failed, and we live in an iterative\/Agile world. As such, our automated tests must be equally equipped for ongoing change. And because the functional unit tests are closely related to requirements at both a white-box and black-box level, developers, not testers, have an integral role in the creation of automated tests.\n<\/p><p>To achieve this level of unit testing, a test framework must be in place. This requires a bit of up-front effort, and the details of creating such a framework go well beyond the scope of this article. 
Additionally, the needs of a test framework will vary depending on the project.\n<\/p><p>Test fixtures become an important part of complex functional unit testing. A test fixture is a class that incorporates all of the setup necessary for running such unit tests. It provides methods that can create common objects (for example, test servers and mock interfaces). The details included in a test fixture are specific to each project, but some common methods include test setup, simulation, and mock object creation and destruction, as well as declaration of any common functionality to be used across unit tests. To provide further detail on test fixture creation would require more space than is available here.\n<\/p><p>Given what may seem like extreme overhead in the creation of complex unit tests, we may begin to question the value. There is, no doubt, a significant up-front cost to the creation of a versatile and useful unit test framework (including a test fixture, which includes all the necessary objects and setup needed to simulate a running environment for the sake of testing). And given the fact that manual functional and user acceptance testing remains a project necessity, it seems like there may be an overlap of effort.\n<\/p><p>But this is not the case.\n<\/p><p>With a little up-front creation of a solid unit test framework, we can make the creation of individual unit tests simple. We can even go as far as requiring a unit test for any functional requirement implementation prior to allowing that requirement (or ticket) to be considered complete. Furthermore, as we discover potential functionality problems, we have the opportunity to introduce a new test right then and there!\nAs the FDA notes in its guidance on general principles of software validation:\nThe hardware system, software program, and general quality assurance system controls discussed below are essential in the automated manufacture of medical devices. 
The systematic validation of software and associated equipment will assure compliance with the QS regulation and reduce confusion, increase employee morale, reduce costs, and improve quality. Further, proper validation will smooth the integration of automated production and quality assurance equipment into manufacturing operations. Medical devices and the manufacturing processes used to produce them vary from the simple to the very complex. Thus, the QS regulation needs to be and is a flexible quality system. This flexibility is increasingly valuable as more device manufacturers move to automated production, test\/inspection, and record-keeping systems.<sup id=\"rdp-ebb-cite_ref-FDAGen02_2-0\" class=\"reference\"><a href=\"#cite_note-FDAGen02-2\" rel=\"external_link\">[2]<\/a><\/sup>\n<\/p>\n<h3><span class=\"mw-headline\" id=\"What_is_a_good_unit_test.3F\">What is a good unit test?<\/span><\/h3>\n<p>In his book <i>Safe and Sound Software<\/i>, Thomas H. Faris describes the unit test as such:\n<\/p>\n<blockquote>Software testing may occur on software modules or units as they are completed. Unit testing is effective for testing units as they are completed, when other units or components have not yet been completed. Testing still remains to be completed to ensure that the application will work as intended when all software units or components are executed together.<sup id=\"rdp-ebb-cite_ref-FarisSafe06_3-0\" class=\"reference\"><a href=\"#cite_note-FarisSafe06-3\" rel=\"external_link\">[3]<\/a><\/sup><\/blockquote>\n<p>This is a start, but unit tests can achieve so much more! 
Faris goes on to describe a number of different categories of software testing<sup id=\"rdp-ebb-cite_ref-FarisSafe06_3-1\" class=\"reference\"><a href=\"#cite_note-FarisSafe06-3\" rel=\"external_link\">[3]<\/a><\/sup>:\n<\/p>\n<ul><li> Black box test<\/li>\n<li> Unit test<\/li>\n<li> Integration test<\/li>\n<li> System test<\/li>\n<li> Load test<\/li>\n<li> Regression test<\/li>\n<li> Requirements-based test<\/li>\n<li> Code-based test<\/li>\n<li> Risk-based test<\/li>\n<li> Clinical test<\/li><\/ul>\n<p>Traditionally this may be considered a fair list. Used wisely, and with the proper framework, however, we can perform black box, integration, system, load, regression, requirements, code-based, risk-based, and clinical tests with efficient unit tests that simulate a true production environment. The purpose of this article is not to go into the technical details of how (explaining unit test frameworks, fixtures, mock objects, and simulations would require much more space). Rather, I simply want to point out the benefits that result. To achieve these benefits, your software team will need to develop a deep understanding of unit tests. It will take some time, but it will be time very well spent.\n<\/p><p>It\u2019s a good idea to have unit tests that go above and beyond what we traditionally think of as unit tests, and go several steps further, automating functional testing. This is another one of those areas where team members often (incorrectly) feel that there is not sufficient time to do all the work. As Faris goes on to state:\n<\/p>\n<blockquote>Software testing and defect resolution are very time-consuming, often draining more than one-half of all effort undertaken by a software organization ... Testing need not wait until the entire product is completed; iteratively designed and developed code may be tested as each iteration of code is completed. 
Prior to beginning of verification or validation, the project plan or other test plan document should discuss the overall strategy, including types of tests to be performed, specific functional tests to be performed, and a designation of test objectives to determine when the product is sufficiently prepared for release and distribution.<sup id=\"rdp-ebb-cite_ref-FarisSafe06_3-2\" class=\"reference\"><a href=\"#cite_note-FarisSafe06-3\" rel=\"external_link\">[3]<\/a><\/sup><\/blockquote>\n<p>Faris is touching on something that is very important in our FDA-regulated environment, and this is the fact that we must document and describe our tests. For our unit tests to be useful we must provide documentation of what each test does (that is, what specifically it is testing) and what the results are. The beauty of unit tests and the tools available (incorporation into our continuous integration environment) is that this process is streamlined in a way that makes the traceability and re-creation of test conditions required for our 510(k) extremely easy!\n<\/p><p>To achieve all of this we will need to have a testing framework capable of application launch, simulations, mock objects, mock interfaces, and temporary data persistence. This all sounds like much more overhead than it actually is, but fear not: the benefits far outweigh the costs.\n<\/p>\n<h2><span class=\"mw-headline\" id=\"What_is_the_value_of_unit_testing.3F\">What is the value of unit testing?<\/span><\/h2>\n<h3><span class=\"mw-headline\" id=\"Immediate_feedback_within_continuous_integration:_Developer_confidence\">Immediate feedback within continuous integration: Developer confidence<\/span><\/h3>\n<p>Too often we view testing as an activity that occurs only at specific times during software development. At worst, software testing takes place upon completion of development (which is when it is learned that development is nowhere near complete). 
In other more zealous environments, it may take place at the end of each iteration. We can do better! How about complex unit tests performing validation continuously, with each code change? It is possible to perform full regression tests with every single code change. It sounds like a significant amount of overhead, but it is not. The real cost to a project is not the effort invested in complex functional unit tests; the real danger is that we put off testing until it is too late to react to a critical issue discovered during some predetermined testing phase.\n<\/p><p>The most effective way of killing a project is to organize it so that testing is deferred to a single make-or-break phase, leaving no room for testing to do what it is supposed to do: discover a defect prior to go-live.\n<\/p><p>At its most basic level, a continuous integration build environment does just one thing: it runs whatever scripts we tell it to. To that end, it is important that the CI build execute unit tests and that a failure of any single unit test is considered a failure of the continuous integration build. The power of a tool such as Jenkins is that we can tell it to run whatever we want, log the outcome, keep build artifacts, run third-party evaluation tools, and report on results. With integration of our software version control system (e.g., Subversion, Git, Mercurial, CVS, etc.), we know the changeset relevant to a particular build. It can be configured to generate a build at whatever interval we want (nightly, hourly, every time there is a code commit, etc.). When a test fails, we know immediately what changeset was involved.\n<\/p><p>Personally, every time I do any code commit of significance, one of the first things I do is check the CI build for success. 
If I've broken the build, I get to work on correcting the problem (and if I cannot correct the problem quickly, I roll my changeset out so that the CI build continues to work until I\u2019ve fixed the issue).\n<\/p>\n<h4><span class=\"mw-headline\" id=\"Easy_refactoring\">Easy refactoring<\/span><\/h4>\n<p>As a developer, refactoring can be a scary thing. Refactoring is perhaps the most effective way of introducing a serious defect while doing something that seems innocuous. With thorough unit tests performing a full regression test with each and every committed software changeset, however, a developer can have confidence that his or her simple code changes have not introduced a defect. We have continuous integration builds running our tests for many reasons, not the least of which is to alert developers to the possibility that their changes have broken the build.\n<\/p><p>As a developer I strive to avoid breaking the continuous integration build. When I do break it, however, I am very pleased to know that the change that caused the problem has been discovered immediately. Correction of a defect becomes much more costly when it is not discovered until the end of a development phase!\n<\/p>\n<h4><span class=\"mw-headline\" id=\"Regression_tests_with_every_code_change\">Regression tests with every code change<\/span><\/h4>\n<p>By \"repeated\" I mean something different from repeatable. The fundamental benefit of repeated tests is that a test can be executed many more times by automation than by a human tester. Sometimes, even without a related code change, and much to our surprise, we see a test suddenly fail where it succeeded numerous times before. What happened?\n<\/p><p>The most difficult software defects to fix (much less, find) are the ones that do not happen consistently. Database locking issues, memory issues, deadlock bugs, memory leaks, and race conditions can result in such defects. 
These defects are serious, but if we never detect them, how can we fix them?\n<\/p><p>As stated previously, it is imperative that we have unit tests that go above and beyond what we traditionally think of as unit tests, going several steps further to automate functional testing. This is another one of those areas where team members often (incorrectly) feel that there is not sufficient time to deal with the creation of unit tests. Given a proper framework, however, creation of unit tests need not be overwhelming.\n<\/p><p>Another occasional issue has to do with misuse of the software version control system. Many developers know the frustration that can come with an accidental code change resulting from one developer stepping over the modifications of another. While this is a rare issue in a properly used version control environment, it does still happen, and unit tests can quickly reveal such a problem at build time.\n<\/p>\n<h4><span class=\"mw-headline\" id=\"Concurrency_tests\">Concurrency tests<\/span><\/h4>\n<p>Concurrency tests are tricky, and it is in concurrency testing that the repeated and rapid nature of functional unit tests can shine where human testers cannot. I personally have witnessed many occasions in which a CI build suddenly fails for no obvious reason. There was no code commit related to the particular point of failure, and yet a unit test that once succeeded suddenly fails. Why?\n<\/p><p>This can happen (and it does happen) because concurrency problems, by their very nature, are hit or miss. Sometimes they are so unlikely to occur that we never witness them during the course of normal testing. When a continuous integration environment runs concurrency tests dozens of times a day, however, we increase the likelihood of finding a hidden and menacing problem. 
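A concurrency test of this sort can be sketched in plain Java. This is a minimal illustration under stated assumptions, not the author's actual test code: the SessionRegistry class, its login method, and the user count are hypothetical stand-ins for a real system under test.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for the system under test.
class SessionRegistry {
    private final AtomicInteger active = new AtomicInteger();
    void login() { active.incrementAndGet(); }
    int activeCount() { return active.get(); }
}

public class ConcurrentLoginSketch {
    public static void main(String[] args) throws Exception {
        final int users = 100;
        final SessionRegistry registry = new SessionRegistry();
        ExecutorService pool = Executors.newFixedThreadPool(16);
        CountDownLatch start = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(users);

        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                try {
                    start.await();      // hold every login until the gate opens
                    registry.login();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown();
                }
            });
        }
        start.countDown();              // release all threads at once to maximize contention
        done.await();
        pool.shutdown();

        // Run once by a human, this rarely catches a race; run dozens of
        // times a day by the CI server, it becomes a meaningful net.
        if (registry.activeCount() != users) {
            throw new AssertionError("lost sessions: " + registry.activeCount());
        }
        System.out.println("active=" + registry.activeCount());
    }
}
```

The latch pattern is the interesting part: queuing all the workers behind a gate and releasing them simultaneously produces far more contention, per run, than simply starting threads in a loop.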
Additionally, unit tests can simulate many concurrent users and processes in a way that even a team of human testers cannot.\n<\/p>\n<h4><span class=\"mw-headline\" id=\"Repeatable_and_traceable_test_results\">Repeatable and traceable test results<\/span><\/h4>\n<p>This is the key to making our unit tests adhere to the standards we have set forth in our quality system so that we may use them as a part of our submission (see the following section on Regulated Environment Needs). If we are going to put forth the effort, and since we already know that unit tests result in a quality improvement to our software, why wouldn\u2019t we want to include these test results?\n<\/p><p>Our continuous integration server can and should be used to store our unit test results right alongside each and every build that it performs.\n<\/p><p>This is a benefit not only in the world of an FDA-regulated environment, of course. In any software project it can be difficult to recreate the conditions under which a defect was discovered. With a CI build executing our build and test scripts under a known environment with a known set of files (the CI build tool pulls from the version control system), it is possible to execute the tests under exact and specific circumstances.\n<\/p><p>Many of the benefits of functional unit testing listed above are gained only when unit tests are written alongside design and development (test-driven methodologies aside). It is imperative that the development team develop and observe test results while design and development activities take place. This is of benefit to the quality assurance team as well, as Dean Leffingwell notes:\n<\/p>\n<blockquote>A comprehensive unit test strategy prevents QA and test personnel from spending most of their time finding and reporting on code-level bugs and allows the team to move its focus to more system-level testing challenges. 
Indeed, for many agile teams, the addition of a comprehensive unit test strategy is a key pivot point in their move toward true agility \u2014 and one that delivers \"best bang for the buck\" in determining overall system quality.<sup id=\"rdp-ebb-cite_ref-LeffingwellAgile11_2_4-0\" class=\"reference\"><a href=\"#cite_note-LeffingwellAgile11_2-4\" rel=\"external_link\">[4]<\/a><\/sup><\/blockquote>\n<p>It is probably becoming clear that a key benefit of functional unit tests is the real-time feedback offered to the development team. Humble and Farley refer to the unit tests that are executed with each software change as \"commit tests\"<sup id=\"rdp-ebb-cite_ref-HumbleCont10_5-0\" class=\"reference\"><a href=\"#cite_note-HumbleCont10-5\" rel=\"external_link\">[5]<\/a><\/sup>: \"Commit tests that run against every check-in provide us with timely feedback on problems with the latest build and on bugs in our application in the small.\"<sup id=\"rdp-ebb-cite_ref-HumbleCont10_5-1\" class=\"reference\"><a href=\"#cite_note-HumbleCont10-5\" rel=\"external_link\">[5]<\/a><\/sup><\/p>\n<p>Project unit tests, which should offer a significant amount of coverage (at least 80 percent), provide the team with built-in acceptance criteria for every committed software change. If a developer's code change causes the CI build to fail, it is immediately known that the change does not meet the minimum acceptance criteria, and it requires urgent attention.\n<\/p><p>Humble and Farley continue:\n<\/p>\n<blockquote>Crucially, the development team must respond immediately to acceptance test breakages that occur as part of the normal development process. They must decide if the breakage is a result of a regression that has been introduced, an intentional change in the behavior of the application, or a problem with the test. 
Then they must take appropriate action to get the automated acceptance test suite passing again.<sup id=\"rdp-ebb-cite_ref-HumbleCont10_5-2\" class=\"reference\"><a href=\"#cite_note-HumbleCont10-5\" rel=\"external_link\">[5]<\/a><\/sup><\/blockquote>\n<h4><span class=\"mw-headline\" id=\"Regulated_environment_needs\">Regulated environment needs<\/span><\/h4>\n<p>Per 21 CFR Part 820.30 on design controls:\n<\/p>\n<blockquote>(f) <i>Design verification<\/i>. Each manufacturer shall establish and maintain procedures for verifying the device design. Design verification shall confirm that the design output meets the design input requirements. The results of the design verification, including identification of the design, method(s), the date, and the individual(s) performing the verification, shall be documented in the design history file (DHF).<sup id=\"rdp-ebb-cite_ref-21CFRPart820.30_6-0\" class=\"reference\"><a href=\"#cite_note-21CFRPart820.30-6\" rel=\"external_link\">[6]<\/a><\/sup><\/blockquote>\n<p>Simply put, our functional unit tests must be a part of our DHF, and we must document each test and test result (success or failure) as well as tie tests and outcomes to specific software releases. This is made extremely easy with a continuous integration environment in which builds and build outcomes (including test results) are stored on a server, labeled, and linked to from our DHF. Indeed, what is sometimes a tedious task when it comes to manual execution and documentation of test results becomes quite convenient.\n<\/p><p>The same is true of design validation:\n<\/p>\n<blockquote>(g) <i>Design validation<\/i>. Each manufacturer shall establish and maintain procedures for validating the device design. Design validation shall be performed under defined operating conditions on initial production units, lots, or batches, or their equivalents. 
Design validation shall ensure that devices conform to defined user needs and intended uses and shall include testing of production units under actual or simulated use conditions. Design validation shall include software validation and risk analysis, where appropriate. The results of the design validation, including identification of the design, method(s), the date, and the individual(s) performing the validation, shall be documented in the DHF.<sup id=\"rdp-ebb-cite_ref-21CFRPart820.30_6-1\" class=\"reference\"><a href=\"#cite_note-21CFRPart820.30-6\" rel=\"external_link\">[6]<\/a><\/sup><\/blockquote>\n<p>Because our CI environment packages build and test conditions at a given point in time, we can satisfy the requirements laid out by 21 CFR Part 820.30 (f) and (g) with very little effort. We simply allow our CI environment to do that which it does best, and that which a human tester may spend many hours attempting to do with accuracy.\n<\/p>\n<h4><span class=\"mw-headline\" id=\"Document_the_approach\">Document the approach<\/span><\/h4>\n<p>As discussed, all these tests are indeed very helpful to the creation of good software. However, without a wise approach to incorporating such tests in our FDA-regulated environment, they are of little use in any auditable capacity. It is necessary to document our approach to unit test usage within our standard operating procedures (SOPs) and work instructions, in much the same way that we would document any manual verification and validation test activities.\n<\/p><p>To this end, it is necessary to make our unit tests and their outputs an integral part of our DHF. Each test must be traceable, and this means that unit tests are given unique identifiers. 
These unique identifiers are very easily assigned using an approach in which we organize tests in logical units (for example, by functional area) and label tests sequentially.\n<\/p>\n<h4><span class=\"mw-headline\" id=\"Label_and_trace_tests\">Label and trace tests<\/span><\/h4>\n<p>An approach that I have taken in the past is to assign a high-level numeric identifier to each functional area and a secondary sub-identifier to each specific test. For example, we may have the following functional areas: user session, audit log, data input, data output, and web user interface tests (these are very generic examples of functional areas, granted). Given such functional areas, I would label each test using test naming annotations, with the following high-level identifiers:\n<\/p>\n<ul><li> 1000: user session tests<\/li>\n<li> 2000: audit log tests<\/li>\n<li> 3000: data input tests<\/li>\n<li> 4000: data output tests<\/li>\n<li> 5000: web user interface tests<\/li><\/ul>\n<p>Within each functional area it is then necessary to go a step further, applying a sequential identifier to each test. For example, the user session test package may include tests for functional requirements such as user login, user logout, session expiration, and a multiple-user login concurrency test. In such a scenario, we would label the tests as follows:\n<\/p>\n<ul><li> 1000_010: user login<\/li>\n<li> 1000_020: user logout<\/li>\n<li> 1000_030: session expiration<\/li>\n<li> 1000_040: multiple concurrent user login<\/li><\/ul>\n<p>Using TestNG syntax, along with proper Javadoc comments, it is very easy to label and describe a test such that inclusion in our DHF is indeed very simple.\n<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\"><pre>\/**\n * Test basic user login and session creation with a valid user. 
 *\n * @throws Exception\n *\/\n@Test(dependsOnMethods = {\"testActivePatientIntegrationDisabled\"}, groups = {\"TS0005_AUTO_VE1023\"})\npublic void testActivePatientIntegrationEnabled() throws Exception {\n    Fixture myApp = new Fixture();\n    UserSession mySession = myApp.login(\"test_user\", \"test_password\");\n    assertNotNull(mySession);\n    assertTrue(mySession.active());\n}<\/pre><\/div>\n<p>Any numbering we choose to use for these tests is fine, as long as we document our approach to test labeling in some project-level document, for example a validation plan or master test plan. Such decisions are left to those who design and apply a quality system for the FDA-regulated project. As most of us know by now, the FDA doesn\u2019t tell us exactly how we are to do things; rather, we are simply told that we must create a good quality system, trace our requirements through design, incorporate the history in our DHF, and be able to recreate build and test conditions.\n<\/p><p>If I make this all sound a little too easy, it is because I believe it is easy. Too often we view cGMP guidance as a terrible hindrance to productivity, but we are in control of making things as efficient as we can.\n<\/p>\n<h2><span class=\"mw-headline\" id=\"The_traceability_matrix\">The traceability matrix<\/span><\/h2>\n<p>A critical factor in making unit tests usable in an auditable manner is incorporating them into the traceability matrix. 
As with any test, requirements, design elements, and hazards must be traced to one another through use of the traceability matrix.\n<\/p>\n<blockquote>The project team must document traceability of requirements through specification and testing to ensure that all requirements have been tested and correctly implemented (product requirements traceability matrix).<sup id=\"rdp-ebb-cite_ref-FarisSafe06_3-3\" class=\"reference\"><a href=\"#cite_note-FarisSafe06-3\" rel=\"external_link\">[3]<\/a><\/sup><\/blockquote>\n<p>With each automated test labeled, we can use the built-in JUnit or TestNG functionality (along with XSLT, if we so choose) to create output that is tied to the build number and changeset and traceable within our trace matrix. The output of our tests (which are run during each continuous integration build) may be as follows:\n<\/p>\n<div class=\"mw-highlight mw-content-ltr\" dir=\"ltr\"><pre>TEST NAME            STATUS\nTS0005_AUTO_VE1022   PASS\nTS0005_AUTO_VE1023   PASS\nTS0005_AUTO_VE1024   FAIL\nTS0005_AUTO_VE1025   SKIP\n\u2026<\/pre><\/div>\n<p>Naturally, we hope that all the automated tests pass, but when they fail we need to record the failure. It is my opinion that placing all test outcomes in the DHF is not necessary. Rather, the DHF can point to the continuous integration build server, where automated test results are bundled alongside each build. Finally, at the end of a sprint or iteration, the appropriate test results for the final locked-down build are captured in the DHF and traced appropriately per SOPs.\n<\/p><p>Our SOPs and work instructions will require that we prove traceability of our tests and test results, whether manual or automated. Just as has always been done with the manual tests that we are familiar with, automated tests must be traced to software requirements, design specifications, hazards, and risks. 
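For illustration, a fixed-width status report of this kind can be produced from one build's outcomes with a few lines of Java. This sketch is hypothetical: the test IDs are the generic examples used earlier, and the hand-built map stands in for results that would, in practice, be parsed from the JUnit or TestNG XML output.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a formatter that turns one CI build's test outcomes into the
// fixed-width report archived alongside the build (test IDs are illustrative).
public class TraceReportSketch {

    static String format(Map<String, String> outcomes) {
        StringBuilder report = new StringBuilder();
        report.append(String.format("%-22s %s%n", "TEST NAME", "STATUS"));
        outcomes.forEach((id, status) ->
                report.append(String.format("%-22s %s%n", id, status)));
        return report.toString();
    }

    public static void main(String[] args) {
        // In practice these entries would come from the test runner's XML report.
        Map<String, String> outcomes = new LinkedHashMap<>();
        outcomes.put("TS0005_AUTO_VE1022", "PASS");
        outcomes.put("TS0005_AUTO_VE1023", "PASS");
        outcomes.put("TS0005_AUTO_VE1024", "FAIL");
        outcomes.put("TS0005_AUTO_VE1025", "SKIP");
        System.out.print(format(outcomes));
    }
}
```

A LinkedHashMap is used so the report preserves the order in which results were recorded, which keeps diffs between builds stable.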
The goal is simply to prove that we have tested that which we have designed and implemented, and in the case of automated tests, this is all very easy to achieve!\n<\/p>\n<h3><span class=\"mw-headline\" id=\"Do_we_still_need_manual_tests.3F\">Do we still need manual tests?<\/span><\/h3>\n<p>Yes! Absolutely! There are a number of reasons why manual tests are still, and always will be, required. Take, for example, installation qualification and environmental tests. Both manual and automated tests are valid and valuable, and neither should be considered a replacement for the other.\n<\/p><p>I recall being a child in karate lessons. One day I came home from a lesson, very proud that I had learned to block a punch. \"Come at me with a punch,\" I said to my friend.\n<\/p><p>Doing what I asked, he punched me right in the chest, and I failed to block the punch. The punch wasn't thrown the way I expected (the way we practiced in karate lessons).\n<\/p><p>\"No, no, no! You\u2019re punching me the wrong way!\" I said. I only knew how to block one kind of punch, and when punched a different way, my block no longer worked. To me, this karate lesson highlights the difference between an exception and an error. Automated tests can provide error test coverage very well. But when thrown something unanticipated, they don\u2019t offer the creativity in and of themselves to find the issue.\n<\/p><p>It is up to us, developers and testers, to come up with creative punches to throw at our system. This is where manual testing allows a certain amount of \"creative\" punching that may not be considered during unit test development.\n<\/p><p>Perhaps even more importantly, manual tests give feedback on general application usability and user interaction. 
To this end, a defect that is discovered during manual testing should result in an automated test.\n<\/p>\n<h2><span class=\"mw-headline\" id=\"Notes\">Notes<\/span><\/h2>\n<p>The original author had anticipated writing about the following sub-topics but never did: test fixture, mock objects, avoiding the Singleton design pattern, in-memory DB, and in-memory servlet container.\n<\/p>\n<h2><span class=\"mw-headline\" id=\"References\">References<\/span><\/h2>\n<div class=\"reflist\" style=\"list-style-type: decimal;\">\n<ol class=\"references\">\n<li id=\"cite_note-LeffingwellAgile11_1-1\"><span class=\"mw-cite-backlink\"><a href=\"#cite_ref-LeffingwellAgile11_1_1-0\" rel=\"external_link\">\u2191<\/a><\/span> <span class=\"reference-text\"><span class=\"citation book\">Leffingwell, D.&#32;(2011).&#32;<i>Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise<\/i>.&#32;Addison-Wesley Professional.&#32;p.&#160;61.&#32;<a rel=\"external_link\" class=\"external text\" href=\"http:\/\/en.wikipedia.org\/wiki\/International_Standard_Book_Number\" target=\"_blank\">ISBN<\/a>&#160;9780321635846.<\/span><span class=\"Z3988\" title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=book&amp;rft.btitle=Agile+Software+Requirements%3A+Lean+Requirements+Practices+for+Teams%2C+Programs%2C+and+the+Enterprise&amp;rft.aulast=Leffingwell%2C+D.&amp;rft.au=Leffingwell%2C+D.&amp;rft.date=2011&amp;rft.pages=p.%26nbsp%3B61&amp;rft.pub=Addison-Wesley+Professional&amp;rft.isbn=9780321635846&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Validation\"><span style=\"display: none;\">&#160;<\/span><\/span><\/span>\n<\/li>\n<li id=\"cite_note-FDAGen02-2\"><span class=\"mw-cite-backlink\"><a href=\"#cite_ref-FDAGen02_2-0\" rel=\"external_link\">\u2191<\/a><\/span> <span class=\"reference-text\"><span class=\"citation web\"><a rel=\"external_link\" 
class=\"external text\" href=\"http:\/\/www.fda.gov\/MedicalDevices\/DeviceRegulationandGuidance\/GuidanceDocuments\/ucm085281.htm\" target=\"_blank\">\"General Principles of Software Validation; Final Guidance for Industry and FDA Staff\"<\/a>.&#32;Food and Drug Administration.&#32;11 January 2002<span class=\"printonly\">.&#32;<a rel=\"external_link\" class=\"external free\" href=\"http:\/\/www.fda.gov\/MedicalDevices\/DeviceRegulationandGuidance\/GuidanceDocuments\/ucm085281.htm\" target=\"_blank\">http:\/\/www.fda.gov\/MedicalDevices\/DeviceRegulationandGuidance\/GuidanceDocuments\/ucm085281.htm<\/a><\/span><span class=\"reference-accessdate\">.&#32;Retrieved 27 April 2016<\/span>.<\/span><span class=\"Z3988\" title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=bookitem&amp;rft.btitle=General+Principles+of+Software+Validation%3B+Final+Guidance+for+Industry+and+FDA+Staff&amp;rft.atitle=&amp;rft.date=11+January+2002&amp;rft.pub=Food+and+Drug+Administration&amp;rft_id=http%3A%2F%2Fwww.fda.gov%2FMedicalDevices%2FDeviceRegulationandGuidance%2FGuidanceDocuments%2Fucm085281.htm&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Validation\"><span style=\"display: none;\">&#160;<\/span><\/span><\/span>\n<\/li>\n<li id=\"cite_note-FarisSafe06-3\"><span class=\"mw-cite-backlink\">\u2191 <sup><a href=\"#cite_ref-FarisSafe06_3-0\" rel=\"external_link\">3.0<\/a><\/sup> <sup><a href=\"#cite_ref-FarisSafe06_3-1\" rel=\"external_link\">3.1<\/a><\/sup> <sup><a href=\"#cite_ref-FarisSafe06_3-2\" rel=\"external_link\">3.2<\/a><\/sup> <sup><a href=\"#cite_ref-FarisSafe06_3-3\" rel=\"external_link\">3.3<\/a><\/sup><\/span> <span class=\"reference-text\"><span class=\"citation book\">Faris, T.H.&#32;(2006).&#32;<i>Safe and Sound Software: Creating an Efficient and Effective Quality System for Software Medical Device Organizations<\/i>.&#32;ASQ Quality 
Press.&#32;p.&#160;118\u2013123.&#32;<a rel=\"external_link\" class=\"external text\" href=\"http:\/\/en.wikipedia.org\/wiki\/International_Standard_Book_Number\" target=\"_blank\">ISBN<\/a>&#160;0873896742.<\/span><span class=\"Z3988\" title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=book&amp;rft.btitle=Safe+and+Sound+Software%3A+Creating+an+Efficient+and+Effective+Quality+System+for+Software+Medical+Device+Organizations&amp;rft.aulast=Faris%2C+T.H.&amp;rft.au=Faris%2C+T.H.&amp;rft.date=2006&amp;rft.pages=p.%26nbsp%3B118%E2%80%93123&amp;rft.pub=ASQ+Quality+Press&amp;rft.isbn=0873896742&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Validation\"><span style=\"display: none;\">&#160;<\/span><\/span><\/span>\n<\/li>\n<li id=\"cite_note-LeffingwellAgile11_2-4\"><span class=\"mw-cite-backlink\"><a href=\"#cite_ref-LeffingwellAgile11_2_4-0\" rel=\"external_link\">\u2191<\/a><\/span> <span class=\"reference-text\"><span class=\"citation book\">Leffingwell, D.&#32;(2011).&#32;<i>Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise<\/i>.&#32;Addison-Wesley Professional.&#32;p.&#160;196.&#32;<a rel=\"external_link\" class=\"external text\" href=\"http:\/\/en.wikipedia.org\/wiki\/International_Standard_Book_Number\" target=\"_blank\">ISBN<\/a>&#160;9780321635846.<\/span><span class=\"Z3988\" title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=book&amp;rft.btitle=Agile+Software+Requirements%3A+Lean+Requirements+Practices+for+Teams%2C+Programs%2C+and+the+Enterprise&amp;rft.aulast=Leffingwell%2C+D.&amp;rft.au=Leffingwell%2C+D.&amp;rft.date=2011&amp;rft.pages=p.%26nbsp%3B196&amp;rft.pub=Addison-Wesley+Professional&amp;rft.isbn=9780321635846&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Validation\"><span style=\"display: 
none;\">&#160;<\/span><\/span><\/span>\n<\/li>\n<li id=\"cite_note-HumbleCont10-5\"><span class=\"mw-cite-backlink\">\u2191 <sup><a href=\"#cite_ref-HumbleCont10_5-0\" rel=\"external_link\">5.0<\/a><\/sup> <sup><a href=\"#cite_ref-HumbleCont10_5-1\" rel=\"external_link\">5.1<\/a><\/sup> <sup><a href=\"#cite_ref-HumbleCont10_5-2\" rel=\"external_link\">5.2<\/a><\/sup><\/span> <span class=\"reference-text\"><span class=\"citation book\">Humble, J.; Farley, D.&#32;(2010).&#32;<i>Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation<\/i>.&#32;Addison-Wesley Professional.&#32;p.&#160;124.&#32;<a rel=\"external_link\" class=\"external text\" href=\"http:\/\/en.wikipedia.org\/wiki\/International_Standard_Book_Number\" target=\"_blank\">ISBN<\/a>&#160;9780321601912.<\/span><span class=\"Z3988\" title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=book&amp;rft.btitle=Continuous+Delivery%3A+Reliable+Software+Releases+through+Build%2C+Test%2C+and+Deployment+Automation&amp;rft.aulast=Humble%2C+J.%3B+Farley%2C+D.&amp;rft.au=Humble%2C+J.%3B+Farley%2C+D.&amp;rft.date=2010&amp;rft.pages=p.%26nbsp%3B124&amp;rft.pub=Addison-Wesley+Professional&amp;rft.isbn=9780321601912&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Validation\"><span style=\"display: none;\">&#160;<\/span><\/span><\/span>\n<\/li>\n<li id=\"cite_note-21CFRPart820.30-6\"><span class=\"mw-cite-backlink\">\u2191 <sup><a href=\"#cite_ref-21CFRPart820.30_6-0\" rel=\"external_link\">6.0<\/a><\/sup> <sup><a href=\"#cite_ref-21CFRPart820.30_6-1\" rel=\"external_link\">6.1<\/a><\/sup><\/span> <span class=\"reference-text\"><span class=\"citation web\"><a rel=\"external_link\" class=\"external text\" href=\"https:\/\/www.accessdata.fda.gov\/scripts\/cdrh\/cfdocs\/cfCFR\/CFRSearch.cfm?fr=820.30\" target=\"_blank\">\"Title 21--Food and Drugs, Part 820--Quality System Regulation, Sec. 
820.30 Design controls\"<\/a>.&#32;<i>CFR - Code of Federal Regulations Title 21<\/i>.&#32;Food and Drug Administration.&#32;21 August 2015<span class=\"printonly\">.&#32;<a rel=\"external_link\" class=\"external free\" href=\"https:\/\/www.accessdata.fda.gov\/scripts\/cdrh\/cfdocs\/cfCFR\/CFRSearch.cfm?fr=820.30\" target=\"_blank\">https:\/\/www.accessdata.fda.gov\/scripts\/cdrh\/cfdocs\/cfCFR\/CFRSearch.cfm?fr=820.30<\/a><\/span><span class=\"reference-accessdate\">.&#32;Retrieved 27 April 2016<\/span>.<\/span><span class=\"Z3988\" title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=bookitem&amp;rft.btitle=Title+21--Food+and+Drugs%2C+Part+820--Quality+System+Regulation%2C+Sec.+820.30+Design+controls&amp;rft.atitle=CFR+-+Code+of+Federal+Regulations+Title+21&amp;rft.date=21+August+2015&amp;rft.pub=Food+and+Drug+Administration&amp;rft_id=https%3A%2F%2Fwww.accessdata.fda.gov%2Fscripts%2Fcdrh%2Fcfdocs%2FcfCFR%2FCFRSearch.cfm%3Ffr%3D820.30&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Validation\"><span style=\"display: none;\">&#160;<\/span><\/span><\/span>\n<\/li>\n<\/ol><\/div>\n\n<!-- \nNewPP limit report\nCached time: 20190104224854\nCache expiry: 86400\nDynamic content: false\nCPU time usage: 0.195 seconds\nReal time usage: 0.236 seconds\nPreprocessor visited node count: 4042\/1000000\nPreprocessor generated node count: 16965\/1000000\nPost\u2010expand include size: 25909\/2097152 bytes\nTemplate argument size: 8828\/2097152 bytes\nHighest expansion depth: 13\/40\nExpensive parser function count: 0\/100\n-->\n\n<!-- \nTransclusion expansion time report (%,ms,calls,template)\n100.00% 166.874 1 - -total\n100.00% 166.874 1 - Template:Reflist\n 78.42% 130.861 6 - Template:Citation\/core\n 64.89% 108.290 4 - Template:Cite_book\n 22.18% 37.016 2 - Template:Cite_web\n 7.40% 12.349 4 - Template:Citation\/identifier\n 4.23% 7.058 7 - Template:Citation\/make_link\n 2.08% 
3.477 8 - Template:Hide_in_print\n 1.92% 3.203 4 - Template:Only_in_print\n-->\n\n<!-- Saved in parser cache with key limswiki:pcache:idhash:8686-0!*!0!!en!*!* and timestamp 20190104224854 and revision id 25245\n -->\n<\/div><div class=\"printfooter\">Source: <a rel=\"external_link\" class=\"external\" href=\"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Validation\">https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Validation<\/a><\/div>\n\t\t\t\t\t\t\t\t\t\t<!-- end content -->\n\t\t\t\t\t\t\t\t\t\t<div class=\"visualClear\"><\/div>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t\t<!-- end of the left (by default at least) column -->\n\t\t<div class=\"visualClear\"><\/div>\n\t\t\t\t\t\n\t\t<\/div>\n\t\t\n\n<\/body>","fe175ec1d1846bbf56d90860f4b8b11a_images":[],"fe175ec1d1846bbf56d90860f4b8b11a_timestamp":1546642134,"216ba38dff8063ab9365f6477c058f2c_type":"article","216ba38dff8063ab9365f6477c058f2c_title":"Version control","216ba38dff8063ab9365f6477c058f2c_url":"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Version_Control","216ba38dff8063ab9365f6477c058f2c_plaintext":"\n\n\t\t\n\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\n\t\t\t\tLII:Medical Device Software Development with Continuous Integration\/Version Control\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\tFrom LIMSWiki\n\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\tJump to: navigation, search\n\n\t\t\t\t\t\n\t\t\t\t\t-----Return to the beginning of this guide-----\nVersion control \nThe earliest phase of any software project is the planning phase. At this stage, people involved with the project have meetings and discuss some very high level needs. There are probably some presentations and documents that are created. Project management plans have not been developed, but they should be thought about. 
And as I stated previously, we begin creation of our work instructions (the application of our SOPs) in this stage.\nThe design history file (DHF) of a project must contain all of the historical \"stuff\" that goes into the project, so even at this early stage it is necessary to decide on a version control system and create a repository. Sure, there may be no tracing involved yet, but this early \"stuff\" should still be kept around in the DHF. Because the earliest phases of the software project result in outputs that are to be included in the DHF, it is necessary to determine the version control tool and establish the version control repository early on (for the sake of this article, I assume a basic understanding of version control\/revision control systems).\n\nProject traceability \nTracing is everything, and Subversion, with its changesets, lends itself extremely well to integration with other tools used throughout the project. When used with your issue tracking software, every issue can be linked directly with a set of items in the repository that are related to addressing and resolving that issue. With a click of the mouse we can see a list of all the project file modifications related to a single issue.\nI recommend using a single version control system and repository for all of the \"stuff\" that goes into a project (though there are many good reasons to use a repository-per-project approach as well). This means that project management plans, documents, presentations, code, test data, and results should all go into the same repository, and the repository itself is laid out so that each project has its own trunk, tags, and branches. If documents are stored in one repository (or in a different version control system altogether) and software code is stored in another, we lose much of the project traceability that we could have.\n(Note: When placing binaries in a version control system, there is no merge path as with text file source code. 
This means it is generally a good practice for team members, when editing documents, to place a strict lock on the file while editing. This can be done in Subversion.[1] Strict file locking allows others to be notified that another user is currently working on a file.)\nWhile a clear benefit of this approach is the fact that all of the \"stuff\" of a project is associated with the same repository, some may view this as a problem with the setup. I suggest this setup only because I am thinking specifically in terms of an FDA-regulated software product in which it is beneficial to relate all elements of the project in a single traceable repository. In this setup documentation will be versioned (and tagged) along with project source code, and this may or may not be desirable depending on project needs.\nSubversion is superior to many of its predecessors because of (among other things) its introduction of \"changesets.\" A changeset provides a snapshot in time of the entire project repository. When documents, presentations, or source code are changed and committed to the repository, a new changeset number is created. Now, at any time in the future, we can check out all items in the repository tree as of that changeset. When asked what version of a product something was changed in, we can pull everything relevant to the project at the point of that change. No longer do we have to tag or label our repository in order to revisit a particular instance in time (although Subversion still allows tagging). Every single commit to the repository effectively results in a \"tag.\"\nThis is not to say that tagging is no longer useful. On the contrary, it remains very useful. 
All software releases, including internal releases, should be tagged (and our work instructions should tell us when and how to perform tagging).\nAnother advantage of Subversion is that, unlike some of its predecessors, it allows for the control and history of directories as well as files (including file and directory name changes). The most commonly used predecessor to Subversion, CVS, did not maintain a version history of a file or directory that was renamed. Subversion can handle the renaming of any version-controlled object.\n\nReferences \n\n\n\u2191 K\u00fcng, S.; Onken, L.; Large, S.&#32;(20 August 2015).&#32;\"Chapter 4. Daily Use Guide: Locking\".&#32;TortoiseSVN: A Subversion client for Windows.&#32;https:\/\/tortoisesvn.net\/docs\/nightly\/TortoiseSVN_en\/tsvn-dug-locking.html .&#32;Retrieved 27 April 2016 . &#160; \n\n\nSource: <a rel=\"external_link\" class=\"external\" href=\"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Version_Control\">https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Version_Control<\/a>\n
page\n\t\t\t\t\t\t\t\t\t\t\tHelp\n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\n\t\t\n\t\t\t\n\t\t\tSearch\n\n\t\t\t\n\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t&#160;\n\t\t\t\t\t\t\n\t\t\t\t\n\n\t\t\t\t\t\t\t\n\t\t\n\t\t\t\n\t\t\tTools\n\n\t\t\t\n\t\t\t\t\n\t\t\t\t\t\t\t\t\t\t\tWhat links here\n\t\t\t\t\t\t\t\t\t\t\tRelated changes\n\t\t\t\t\t\t\t\t\t\t\tSpecial pages\n\t\t\t\t\t\t\t\t\t\t\tPermanent link\n\t\t\t\t\t\t\t\t\t\t\tPage information\n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\n\t\t\n\t\tPrint\/export\n\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\t\tCreate a book\n\t\t\t\t\t\t\t\t\t\t\tDownload as PDF\n\t\t\t\t\t\t\t\t\t\t\tDownload as Plain text\n\t\t\t\t\t\t\t\t\t\t\tPrintable version\n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\n\t\t\n\t\t\n\t\tSponsors\n\t\t\n\t\t\t \r\n\n\t\r\n\n\t\r\n\n\t\r\n\n\t\n\t\r\n\n \r\n\n\t\n\t\r\n\n \r\n\n\t\n\t\r\n\n\t\n\t\r\n\n\t\r\n\n\t\r\n\n\t\r\n\t\t\n\t\t\n\t\t\t\n\t\t\n\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t This page was last modified on 27 April 2016, at 21:18.\n\t\t\t\t\t\t\t\t\tThis page has been accessed 331 times.\n\t\t\t\t\t\t\t\t\tContent is available under a Creative Commons Attribution-ShareAlike 4.0 International License unless otherwise noted.\n\t\t\t\t\t\t\t\t\tPrivacy policy\n\t\t\t\t\t\t\t\t\tAbout LIMSWiki\n\t\t\t\t\t\t\t\t\tDisclaimers\n\t\t\t\t\t\t\t\n\t\t\n\t\t\n\t\t\n\n","216ba38dff8063ab9365f6477c058f2c_html":"<body class=\"mediawiki ltr sitedir-ltr ns-202 ns-subject page-LII_Medical_Device_Software_Development_with_Continuous_Integration_Version_Control skin-monobook action-view\">\n<div id=\"rdp-ebb-globalWrapper\">\n\t\t<div id=\"rdp-ebb-column-content\">\n\t\t\t<div id=\"rdp-ebb-content\" class=\"mw-body\" role=\"main\">\n\t\t\t\t<a id=\"rdp-ebb-top\"><\/a>\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t<h1 id=\"rdp-ebb-firstHeading\" class=\"firstHeading\" lang=\"en\">LII:Medical Device Software Development with Continuous 
Integration\/Version Control<\/h1>\n\t\t\t\t\n\t\t\t\t<div id=\"rdp-ebb-bodyContent\" class=\"mw-body-content\">\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\t\n\n\t\t\t\t\t<!-- start content -->\n\t\t\t\t\t<div id=\"rdp-ebb-mw-content-text\" lang=\"en\" dir=\"ltr\" class=\"mw-content-ltr\"><div align=\"center\">-----Return to <a href=\"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\" title=\"LII:Medical Device Software Development with Continuous Integration\" target=\"_blank\" class=\"wiki-link\" data-key=\"3cb3f79774b24a8afa847a72c56c4835\">the beginning<\/a> of this guide-----<\/div>\n<h2><span class=\"mw-headline\" id=\"Version_control\">Version control<\/span><\/h2>\n<p>The earliest phase of any software project is the planning phase. At this stage, people involved with the project have meetings and discuss some very high level needs. There are probably some presentations and documents that are created. Project management plans have not been developed, but they should be thought about. And as I stated previously, we begin creation of our work instructions (the application of our SOPs) in this stage.\n<\/p><p>The design history file (DHF) of a project must contain all of the historical \"stuff\" that goes into the project, so even at this early stage it is necessary to decide on a version control system and create a repository. Sure, there may be no tracing involved yet, but this early \"stuff\" should still be kept around In the DHF. 
Because the earliest phases of the software project result in outputs that are to be included in the DHF, it is necessary determine the version control tool and establish the version control repository early on (for the sake of this article, I assume a basic understanding of version control\/revision control systems).\n<\/p>\n<h2><span class=\"mw-headline\" id=\"Project_traceability\">Project traceability<\/span><\/h2>\n<p>Tracing is everything, and Subversion, with its changesets, lends itself extremely well to integration with other tools used throughout the project. When used with your issue tracking software, every issue can be linked directly with a set of items in the repository that are related to addressing and resolving that issue. With a click of the mouse we can see a list of all the project file modifications related to a single issue.\n<\/p><p>I recommend using a single version control system and repository for all of the \"stuff\" that goes into a project (there are many good reasons to use a repository-per-project approach as well). This means that project management plans, documents, presentations, code, test data, and results should all go into the same repository, and the repository itself is laid out so that each project has its own trunk, tags, and branches. If documents are stored in a separate repository (or in a different version control system altogether) and software code is stored in a different repository, we lose much of the project traceability that we could have.\n<\/p><p>(Note: When placing binaries in a version control system, there is no merge path as with text file source code. This means it is generally a good practice for team members, when editing documents, to place a strict lock on the file while editing. 
This can be done in Subversion.<sup id=\"rdp-ebb-cite_ref-K.C3.BCngTort15_1-0\" class=\"reference\"><a href=\"#cite_note-K.C3.BCngTort15-1\" rel=\"external_link\">[1]<\/a><\/sup> Strict file locking allows others to be notified that another user is currently working on a file.)\n<\/p><p>While a clear benefit of this approach is the fact that all of the \"stuff\" of a project is associated with the same repository, some may view this as a problem with the setup. I suggest this setup only because I am thinking specifically in terms of an FDA-regulated software product in which it is beneficial to relate all elements of the project in a single traceable repository. In this setup documentation will be versioned (and tagged) along with project source code, and this may or may not be desirable depending on project needs.\n<\/p><p>Subversion is superior to many of its predecessors because of (among other things) its introduction to \"changesets.\" A changeset provides a snapshot in time of the entire project repository. When documents, presentations, or source code are changed and committed to the repository, a new changeset number is created. Now, at any time in the future, we can checkout all items in the repository tree as of that changeset. When asked what version of a product something was changed in, we can pull everything relevant to the project at the point of that change. No longer do we have to tag or label our repository in order to revisit a particular instance in time (although Subversion still allows tagging). Every single commit to the repository effectively results in a \"tag.\"\n<\/p><p>This is not to say that tagging is no longer useful. On the contrary, it remains very useful. 
All software releases, included internal releases, should be tagged (and our work instructions should tell us when and how to perform tagging).\n<\/p><p>Another advantage of Subversion is that, unlike some of its predecessors, it allows for the control and history of directories as well as files (including file and directory name changes). The most commonly used predecessor to Subversion, CVS, did not maintain a version history of a file or directory that was renamed. Subversion can handle the renaming of any version-controlled object.\n<\/p>\n<h2><span class=\"mw-headline\" id=\"References\">References<\/span><\/h2>\n<div class=\"reflist\" style=\"list-style-type: decimal;\">\n<ol class=\"references\">\n<li id=\"cite_note-K.C3.BCngTort15-1\"><span class=\"mw-cite-backlink\"><a href=\"#cite_ref-K.C3.BCngTort15_1-0\" rel=\"external_link\">\u2191<\/a><\/span> <span class=\"reference-text\"><span class=\"citation web\">K\u00fcng, S.; Onken, L.; Large, S.&#32;(20 August 2015).&#32;<a rel=\"external_link\" class=\"external text\" href=\"https:\/\/tortoisesvn.net\/docs\/nightly\/TortoiseSVN_en\/tsvn-dug-locking.html\" target=\"_blank\">\"Chapter 4. 
Daily Use Guide: Locking\"<\/a>.&#32;<i>TortoiseSVN: A Subversion client for Windows<\/i><span class=\"printonly\">.&#32;<a rel=\"external_link\" class=\"external free\" href=\"https:\/\/tortoisesvn.net\/docs\/nightly\/TortoiseSVN_en\/tsvn-dug-locking.html\" target=\"_blank\">https:\/\/tortoisesvn.net\/docs\/nightly\/TortoiseSVN_en\/tsvn-dug-locking.html<\/a><\/span><span class=\"reference-accessdate\">.&#32;Retrieved 27 April 2016<\/span>.<\/span><span class=\"Z3988\" title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=bookitem&amp;rft.btitle=Chapter+4.+Daily+Use+Guide%3A+Locking&amp;rft.atitle=TortoiseSVN%3A+A+Subversion+client+for+Windows&amp;rft.aulast=K%C3%BCng%2C+S.%3B+Onken%2C+L.%3B+Large%2C+S.&amp;rft.au=K%C3%BCng%2C+S.%3B+Onken%2C+L.%3B+Large%2C+S.&amp;rft.date=20+August+2015&amp;rft_id=https%3A%2F%2Ftortoisesvn.net%2Fdocs%2Fnightly%2FTortoiseSVN_en%2Ftsvn-dug-locking.html&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Version_Control\"><span style=\"display: none;\">&#160;<\/span><\/span><\/span>\n<\/li>\n<\/ol><\/div>\n\n<!-- \nNewPP limit report\nCached time: 20190104224854\nCache expiry: 86400\nDynamic content: false\nCPU time usage: 0.053 seconds\nReal time usage: 0.064 seconds\nPreprocessor visited node count: 663\/1000000\nPreprocessor generated node count: 11529\/1000000\nPost\u2010expand include size: 4933\/2097152 bytes\nTemplate argument size: 1933\/2097152 bytes\nHighest expansion depth: 12\/40\nExpensive parser function count: 0\/100\n-->\n\n<!-- \nTransclusion expansion time report (%,ms,calls,template)\n100.00% 47.486 1 - Template:Reflist\n100.00% 47.486 1 - -total\n 83.68% 39.736 1 - Template:Cite_web\n 72.59% 34.470 1 - Template:Citation\/core\n 7.42% 3.525 2 - Template:Citation\/make_link\n-->\n\n<!-- Saved in parser cache with key limswiki:pcache:idhash:8685-0!*!0!!*!*!* and timestamp 20190104224854 and revision id 25243\n 
-->\n<\/div><div class=\"printfooter\">Source: <a rel=\"external_link\" class=\"external\" href=\"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Version_Control\">https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Version_Control<\/a><\/div>\n\t\t\t\t\t\t\t\t\t\t<!-- end content -->\n\t\t\t\t\t\t\t\t\t\t<div class=\"visualClear\"><\/div>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t\t<!-- end of the left (by default at least) column -->\n\t\t<div class=\"visualClear\"><\/div>\n\t\t\t\t\t\n\t\t<\/div>\n\t\t\n\n<\/body>","216ba38dff8063ab9365f6477c058f2c_images":[],"216ba38dff8063ab9365f6477c058f2c_timestamp":1546642134,"a2777f409854379de217cacc4dde9d3a_type":"article","a2777f409854379de217cacc4dde9d3a_title":"CI theory, practices, and tools","a2777f409854379de217cacc4dde9d3a_url":"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Continuous_Integration:_Part_2","a2777f409854379de217cacc4dde9d3a_plaintext":"\n\n\t\t\n\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\n\t\t\t\tLII:Medical Device Software Development with Continuous Integration\/Continuous Integration: Part 2\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\tFrom LIMSWiki\n\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\tJump to: navigation, search\n\n\t\t\t\t\t\n\t\t\t\t\t-----Return to the beginning of this guide-----\n\r\n\nNote: The following content originates from a separate source. However, it elaborates further on the original author's ideas in the previous section and adds additional information regarding benefits, drawbacks, and tools other than Jenkins, Trac, and Redmine. 
It has been added and slightly modified under the same license as the rest of the content.\n\nContents\n\n1 Theory \n2 Recommended practices \n\n2.1 Maintain a code repository \n2.2 Automate the build \n2.3 Make the build self-testing \n2.4 Everyone commits to the baseline every day \n2.5 Every commit (to baseline) should be built \n2.6 Keep the build fast \n2.7 Test in a clone of the production environment \n2.8 Make it easy to get the latest deliverables \n2.9 Everyone can see the results of the latest build \n2.10 Automate deployment \n\n\n3 Advantages and disadvantages \n\n3.1 Advantages \n3.2 Disadvantages \n\n\n4 Software \n5 Further reading \n6 References \n7 External links \n\n\n\nTheory \nWhen embarking on a change, a developer takes a copy of the current code base on which to work. As other developers submit changed code to the code repository, this copy gradually ceases to reflect the repository code. When developers submit code to the repository, they must first update their code to reflect the changes in the repository since they took their copy. The more changes the repository contains, the more work developers must do before submitting their own changes.\nEventually, the repository may become so different from the developers' baselines that they enter what is sometimes called \"integration hell\",[1] where the time it takes to integrate exceeds the time it took to make their original changes. 
In a worst-case scenario, developers may have to discard their changes and completely redo the work.\nContinuous integration involves integrating early and often, so as to avoid the pitfalls of \"integration hell.\" The practice aims to reduce rework and thus reduce cost and time, particularly when automated as a best practice.[2][3]\n\nRecommended practices \nContinuous integration should occur frequently enough that no intervening window remains between commit and build, and such that no errors can arise without developers noticing them and correcting them immediately.[4] Normal practice is to trigger these builds by every commit to a repository, rather than a periodically scheduled build. The practicalities of doing this in a multi-developer environment of rapid commits are such that it's usual to trigger a short timer after each commit, then to start a build when either this timer expires, or after a rather longer interval since the last build. Automated tools such as CruiseControl or Jenkins offer this scheduling automatically.\nAnother factor is the need for a version control system that supports atomic commits, i.e. all of a developer's changes may be seen as a single commit operation. There is no point in trying to build from only half of the changed files.\n\nMaintain a code repository \nThis practice advocates the use of a revision control system for the project's source code. All artifacts required to build the project should be placed in the repository. In this practice and in the revision control community, the convention is that the system should be buildable from a fresh checkout and not require additional dependencies. Extreme Programming advocate Martin Fowler also mentions that where branching is supported by tools, its use should be minimized.[4] Instead, integrating changes is preferred rather than creating multiple versions of the software that are maintained simultaneously. 
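The commit-triggered scheduling described in the recommended-practices introduction (a short timer after each commit, bounded by a longer maximum interval since the last build) amounts to simple decision logic. A minimal sketch, with illustrative timings rather than any particular CI server's defaults:

```python
# Sketch of "quiet period" build scheduling: start a build once no commit
# has arrived for QUIET seconds, or once MAX_WAIT seconds have passed since
# the last build, whichever comes first. Timings are illustrative only.
QUIET = 60       # seconds of commit silence before triggering a build
MAX_WAIT = 600   # upper bound between builds during rapid commit activity

def should_build(now, last_commit, last_build):
    quiet_elapsed = now - last_commit >= QUIET   # commits have settled
    overdue = now - last_build >= MAX_WAIT       # don't starve the build
    return quiet_elapsed or overdue

# A commit 10 s ago during rapid activity: keep waiting for quiet...
assert not should_build(now=100, last_commit=90, last_build=50)
# ...unless the last build is already more than MAX_WAIT in the past.
assert should_build(now=700, last_commit=690, last_build=50)
# Quiet period satisfied: build now.
assert should_build(now=200, last_commit=100, last_build=150)
```

The two-threshold design keeps builds close to every commit while guaranteeing a build eventually happens even when commits never pause.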
The mainline (or trunk) should be the place for the working version of the software.\n\nAutomate the build \nA single command should have the capability of building the system. Many build-tools, such as make, have existed for years. Other more recent tools like Ant, Maven, MSBuild or IBM Rational Build Forge are frequently used in continuous integration environments. Automation of the build should include automating the integration, which often includes deployment into a production-like environment. In many cases, the build script not only compiles binaries, but also generates documentation, website pages, statistics, and distribution media (such as Windows MSI files, RPM or DEB files).\n\nMake the build self-testing \nOnce the code is built, all tests should run to confirm that it behaves as the developers expect it to behave.\n\nEveryone commits to the baseline every day \nBy committing regularly, every committer can reduce the number of conflicting changes. Checking in a week's worth of work runs the risk of conflicting with other features and can be very difficult to resolve. Early, small conflicts in an area of the system cause team members to communicate about the change they are making.\nMany programmers recommend committing all changes at least once a day (once per feature built), and in addition performing a nightly build.\n\nEvery commit (to baseline) should be built \nThe system should build commits to the current working version in order to verify that they integrate correctly. A common practice is to use automated continuous integration, although this may be done manually. 
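A self-testing build in miniature: the build reports success only when compilation and every registered test pass. The function and names below are hypothetical, a sketch of the principle rather than any real build tool:

```python
# Sketch of a self-testing build step: a build is "green" only if it both
# compiles and passes every test; any failure fails the whole build.
def run_build(compile_ok, tests):
    if not compile_ok:
        return "build failed"
    failures = [name for name, passed in tests.items() if not passed]
    if failures:
        return "tests failed: " + ", ".join(failures)
    return "build ok"

assert run_build(True, {"test_parse": True, "test_io": True}) == "build ok"
assert run_build(True, {"test_parse": False, "test_io": True}).startswith("tests failed")
assert run_build(False, {}) == "build failed"
```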
For many, continuous integration is synonymous with using automated continuous integration where a continuous integration server or daemon monitors the version control system for changes, then automatically runs the build process.\n\nKeep the build fast \nThe build needs to complete rapidly, so that if there is a problem with integration, it is quickly identified.\n\nTest in a clone of the production environment \nHaving a test environment can lead to failures in tested systems when they deploy in the production environment, because the production environment may differ from the test environment in a significant way. However, building a replica of a production environment is cost prohibitive. Instead, the pre-production environment should be built to be a scalable version of the actual production environment to both alleviate costs while maintaining technology stack composition and nuances.\n\nMake it easy to get the latest deliverables \nMaking builds readily available to stakeholders and testers can reduce the amount of rework necessary when rebuilding a feature that doesn't meet requirements. Additionally, early testing reduces the chances that defects survive until deployment. Finding errors earlier also, in some cases, reduces the amount of work necessary to resolve them.\n\nEveryone can see the results of the latest build \nIt should be easy to find out where\/whether the build breaks and who made the relevant change.\n\nAutomate deployment \nMost CI systems allow the running of scripts after a build finishes. In most situations, it is possible to write a script to deploy the application to a live test server that everyone can look at. 
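The post-build deployment script mentioned above can be sketched as a gated copy: publish the artifact to a shared test location only when the build succeeded. Paths and names are illustrative; a real hook would push to an actual test server:

```python
# Sketch of an automated post-build deployment hook: deploy the artifact
# only after a green build, never after a red one. Names are illustrative.
import pathlib
import shutil
import tempfile

def deploy_if_green(build_ok, artifact, target_dir):
    """Copy the build artifact into target_dir when the build succeeded.

    Returns True if a deployment actually happened."""
    if not build_ok:
        return False  # never publish a broken build
    target = pathlib.Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    shutil.copy(str(artifact), str(target / pathlib.Path(artifact).name))
    return True

# Usage: a throwaway "artifact" deployed to a throwaway "server" directory.
with tempfile.TemporaryDirectory() as tmp:
    artifact = pathlib.Path(tmp) / "app-1.0.zip"
    artifact.write_bytes(b"binary contents")
    server = pathlib.Path(tmp) / "test-server"
    assert deploy_if_green(True, artifact, server)
    assert (server / "app-1.0.zip").exists()
    assert not deploy_if_green(False, artifact, server)
```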
A further advance in this way of thinking is the concept of \"continuous deployment,\" which calls for the software to be deployed directly into production, often with additional automation to prevent defects or regressions.[5][6]\n\nAdvantages and disadvantages \nAdvantages \nContinuous integration has many advantages[4]:\n\n ability to revert the codebase back to a bug-free state, without wasting time debugging, when unit tests fail or a bug emerges;\n ability to detect and fix integration problems continuously, avoiding last-minute chaos at release dates (when everyone tries to check in their slightly incompatible versions);\n early warning of broken\/incompatible code;\n early warning of conflicting changes;\n immediate unit testing of all changes;\n constant availability of a \"current\" build for testing, demo, or release purposes;\n immediate feedback to developers on the quality, functionality, or system-wide impact of code they are writing;\n modular, less complex code often a result of frequent code check-in by developers; and\n metrics generated from automated testing and CI (such as metrics for code coverage, code complexity, and features complete) focus developers on developing functional, quality code, and help develop momentum in a team.\nDisadvantages \n initial setup time required;\n well-developed test-suite required to achieve automated testing advantages;\n large-scale refactoring can be troublesome due to continuously changing code base; and\n hardware costs for build machines can be significant.\nMany teams using CI report that the advantages of CI well outweigh the disadvantages.[7] The effect of finding and fixing integration bugs early in the development process saves both time and money over the lifespan of a project.\n\nSoftware \nTo support continuous integration, software tools such as automated build software can be employed.\nSoftware tools for continuous integration include:\n\n AnthillPro \u2014 continuous integration server by 
Urbancode\n Apache Continuum \u2014 continuous integration server supporting Apache Maven and Apache Ant. Supports CVS, Subversion, Ant, Maven, and shell scripts\n Apache Gump \u2014 continuous integration tool by Apache\n Automated Build Studio \u2014 proprietary automated build, continuous integration and release management system by AutomatedQA\n Bamboo \u2014 proprietary continuous integration server by Atlassian Software Systems\n BuildBot \u2014 Python\/Twisted-based continuous build system\n BuildForge - proprietary automated build engine by IBM \/ Rational\n BuildMaster \u2014 proprietary application lifecycle management and continuous integration tool by Inedo\n CABIE - Continuous Automated Build and Integration Environment \u2014 open source, written in Perl; works with CVS, Subversion, AccuRev, Bazaar and Perforce\n Cascade \u2014 proprietary continuous integration tool; provides a checkpointing facility to build and test changes before they are committed\n codeBeamer \u2014 proprietary collaboration software with built-in continuous integration features\n CruiseControl \u2014 Java-based framework for a continuous build process\n CruiseControl.NET \u2014 .NET-based automated continuous integration server\n CruiseControl.rb - Lightweight, Ruby-based continuous integration server that can build any codebase, not only Ruby, released under Apache Licence 2.0\n ElectricCommander \u2014 proprietary continuous integration and release management solution from Electric Cloud\n FinalBuilder Server \u2014 proprietary automated build and continuous integration server by VSoft Technologies\n Go \u2014 proprietary agile build and release management software by Thoughtworks\n Jenkins (formerly known as Hudson) \u2014 MIT-licensed, written in Java, runs in servlet container, supports CVS, Subversion, Mercurial, Git, StarTeam, Clearcase, Ant, NAnt, Maven, and shell scripts\n Software Configuration and Library Manager \u2014 software configuration management system for 
z\/OS by IBM Rational Software\n QuickBuild \u2014 proprietary continuous integration server with a free community edition, featuring build life cycle management and pre-commit verification\n TeamCity \u2014 proprietary continuous integration server by JetBrains with a free professional edition\n Team Foundation Server \u2014 proprietary continuous integration server and source code repository by Microsoft\n Tinderbox \u2014 Mozilla-based product written in Perl\n Rational Team Concert \u2014 proprietary software development collaboration platform with a built-in build engine by IBM, including Rational Build Forge\nSee the in-depth feature matrix linked under external links for deeper comparisons.\n\nFurther reading \n Duvall, P.M. (2007). Continuous Integration: Improving Software Quality and Reducing Risk. Addison-Wesley. ISBN 0321336380.\nReferences \n\n\u2191 Cunningham, W. (20 December 2012). \"Integration Hell\". WikiWikiWeb. http:\/\/c2.com\/cgi\/wiki?IntegrationHell . Retrieved 27 April 2016.\n\n\u2191 Brauneis, D.; H\u00fcttermann, M. (16 January 2010). \"[OSLC] Possible new Working Group - Automation\". open-services.net. http:\/\/open-services.net\/pipermail\/community_open-services.net\/2010-January\/000214.html . Retrieved 27 April 2016.\n\n\u2191 Taylor, B. (10 February 2009). \"Rails Deployment and Automation with ShadowPuppet and Capistrano\". Rails Machine. Archived from the original on 03 March 2011. https:\/\/web.archive.org\/web\/20110303225845\/http:\/\/blog.railsmachine.com\/articles\/2009\/02\/10\/rails-deployment-and-automation-with-shadowpuppet-and-capistrano . Retrieved 27 April 2016.\n\n\u2191 4.0 4.1 4.2 Fowler, M. (01 May 2006). \"Continuous Integration\". MartinFowler.com. http:\/\/www.martinfowler.com\/articles\/continuousIntegration.html . Retrieved 27 April 2016.
\u2191 Ries, E. (30 March 2009). \"Continuous deployment in 5 easy steps\". O'Reilly Radar. O'Reilly Media, Inc. http:\/\/radar.oreilly.com\/2009\/03\/continuous-deployment-5-eas.html . Retrieved 27 April 2016.\n\n\u2191 Fitz, T. (10 February 2009). \"Continuous Deployment at IMVU: Doing the impossible fifty times a day\". timothyfitz.com. http:\/\/timothyfitz.com\/2009\/02\/10\/continuous-deployment-at-imvu-doing-the-impossible-fifty-times-a-day\/ . Retrieved 27 April 2016.\n\n\u2191 Richardson, J. (14 September 2008). \"Agile Software Testing Strategies at No Fluff Just Stuff Conference\". Boston, Massachusetts. https:\/\/nofluffjuststuff.com\/conference\/boston\/2008\/09\/session?id=11645 . Retrieved 27 April 2016.\n\nExternal links \n Comparison of continuous integration software \n Continuous integration by Martin Fowler (an introduction)\n Continuous Integration at the Portland Pattern Repository (a collegial discussion)\n Cross platform testing at the Portland Pattern Repository\n Continuous Integration Server Feature Matrix (archived version of guide to tools)\n Continuous Integration: The Cornerstone of a Great Shop by Jared Richardson (an introduction)\n A Recipe for Build Maintainability and Reusability by Jay Flowers\n Continuous Integration anti-patterns by Paul Duvall\n Extreme programming \n\nSource: <a rel=\"external_link\" class=\"external\" href=\"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Continuous_Integration:_Part_2\">https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Continuous_Integration:_Part_2<\/a>\nThis page was last modified on 27 April 2016, at 21:05. This page has been accessed 897 times. Content is available under a Creative Commons Attribution-ShareAlike 4.0 International License unless otherwise noted.\n","a2777f409854379de217cacc4dde9d3a_html":"<body class=\"mediawiki ltr sitedir-ltr ns-202 ns-subject
page-LII_Medical_Device_Software_Development_with_Continuous_Integration_Continuous_Integration_Part_2 skin-monobook action-view\">\n<div id=\"rdp-ebb-globalWrapper\">\n\t\t<div id=\"rdp-ebb-column-content\">\n\t\t\t<div id=\"rdp-ebb-content\" class=\"mw-body\" role=\"main\">\n\t\t\t\t<a id=\"rdp-ebb-top\"><\/a>\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t<h1 id=\"rdp-ebb-firstHeading\" class=\"firstHeading\" lang=\"en\">LII:Medical Device Software Development with Continuous Integration\/Continuous Integration: Part 2<\/h1>\n\t\t\t\t\n\t\t\t\t<div id=\"rdp-ebb-bodyContent\" class=\"mw-body-content\">\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\t\n\n\t\t\t\t\t<!-- start content -->\n\t\t\t\t\t<div id=\"rdp-ebb-mw-content-text\" lang=\"en\" dir=\"ltr\" class=\"mw-content-ltr\"><div align=\"center\">-----Return to <a href=\"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\" title=\"LII:Medical Device Software Development with Continuous Integration\" target=\"_blank\" class=\"wiki-link\" data-key=\"3cb3f79774b24a8afa847a72c56c4835\">the beginning<\/a> of this guide-----<\/div>\n<p><br \/>\n<b>Note<\/b>: The following content originates from <a href=\"https:\/\/en.wikibooks.org\/wiki\/Introduction_to_Software_Engineering\/Tools\/Continuous_Integration\" class=\"extiw\" title=\"wikibooks:Introduction to Software Engineering\/Tools\/Continuous Integration\" rel=\"external_link\" target=\"_blank\">a separate source<\/a>. 
However, it elaborates further on the original author's ideas in the <a href=\"https:\/\/www.limswiki.org\/index.php\/LII:Medical_Device_Software_Development_with_Continuous_Integration\/Continuous_Integration\" title=\"LII:Medical Device Software Development with Continuous Integration\/Continuous Integration\" target=\"_blank\" class=\"wiki-link\" data-key=\"8aa7cf5a1a2f18bd7ab6b7269eff4787\">previous section<\/a> and adds additional information regarding benefits, drawbacks, and tools other than Jenkins, Trac, and Redmine. It has been added and slightly modified under the <a rel=\"external_link\" class=\"external text\" href=\"https:\/\/creativecommons.org\/licenses\/by-sa\/3.0\/\" target=\"_blank\">same license<\/a> as the rest of the content.\n<\/p>\n\n\n<h2><span class=\"mw-headline\" id=\"Theory\">Theory<\/span><\/h2>\n<p>When embarking on a change, a developer takes a copy of the current code base on which to work. As other developers submit changed code to the code repository, this copy gradually ceases to reflect the repository code. When developers submit code to the repository, they must first update their code to reflect the changes in the repository since they took their copy. The more changes the repository contains, the more work developers must do before submitting their own changes.\n<\/p><p>Eventually, the repository may become so different from the developers' baselines that they enter what is sometimes called \"integration hell\",<sup id=\"rdp-ebb-cite_ref-CunninghamInt09_1-0\" class=\"reference\"><a href=\"#cite_note-CunninghamInt09-1\" rel=\"external_link\">[1]<\/a><\/sup> where the time it takes to integrate exceeds the time it took to make their original changes. 
In a worst-case scenario, developers may have to discard their changes and completely redo the work.\n<\/p><p>Continuous integration involves integrating early and often, so as to avoid the pitfalls of \"integration hell.\" The practice aims to reduce rework and thus reduce cost and time, particularly when automated as a best practice.<sup id=\"rdp-ebb-cite_ref-BrauneisOSLC_2-0\" class=\"reference\"><a href=\"#cite_note-BrauneisOSLC-2\" rel=\"external_link\">[2]<\/a><\/sup><sup id=\"rdp-ebb-cite_ref-TaylorRails09_3-0\" class=\"reference\"><a href=\"#cite_note-TaylorRails09-3\" rel=\"external_link\">[3]<\/a><\/sup>\n<\/p>\n<h2><span class=\"mw-headline\" id=\"Recommended_practices\">Recommended practices<\/span><\/h2>\n<p>Continuous integration should occur frequently enough that no intervening window remains between commit and build, and such that no errors can arise without developers noticing them and correcting them immediately.<sup id=\"rdp-ebb-cite_ref-FowlerCI00_4-0\" class=\"reference\"><a href=\"#cite_note-FowlerCI00-4\" rel=\"external_link\">[4]<\/a><\/sup> Normal practice is to trigger these builds by every commit to a repository, rather than a periodically scheduled build. The practicalities of doing this in a multi-developer environment of rapid commits are such that it's usual to trigger a short timer after each commit, then to start a build when either this timer expires, or after a rather longer interval since the last build. Automated tools such as CruiseControl or Jenkins offer this scheduling automatically.\n<\/p><p>Another factor is the need for a version control system that supports atomic commits, i.e. all of a developer's changes may be seen as a single commit operation. 
There is no point in trying to build from only half of the changed files.\n<\/p>\n<h3><span class=\"mw-headline\" id=\"Maintain_a_code_repository\">Maintain a code repository<\/span><\/h3>\n<p>This practice advocates the use of a revision control system for the project's source code. All artifacts required to build the project should be placed in the repository. The convention, both in this practice and in the revision control community, is that the system should be buildable from a fresh checkout without requiring additional dependencies. Extreme Programming advocate Martin Fowler also advises that where tools support branching, its use should be minimized.<sup id=\"rdp-ebb-cite_ref-FowlerCI00_4-1\" class=\"reference\"><a href=\"#cite_note-FowlerCI00-4\" rel=\"external_link\">[4]<\/a><\/sup> Integrating changes is preferred to maintaining multiple versions of the software simultaneously. The mainline (or trunk) should be the place for the working version of the software.\n<\/p>\n<h3><span class=\"mw-headline\" id=\"Automate_the_build\">Automate the build<\/span><\/h3>\n<p>A single command should be capable of building the system. Many build tools, such as make, have existed for years; more recent tools like Ant, Maven, MSBuild, or IBM Rational Build Forge are frequently used in continuous integration environments. Automation of the build should include automating the integration, which often includes deployment into a production-like environment. 
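<\/p>
<p>Such a single-command build is often just a thin driver that runs each stage in order and stops at the first failure. The sketch below is illustrative only; the <code>make<\/code> targets are placeholder assumptions, and a real project would substitute its own compile, test, and packaging commands.<\/p>

```python
import subprocess
import sys

# Hypothetical stages; a real project would substitute its own
# compiler, test runner, and packaging commands.
BUILD_STEPS = [
    ("compile", "make all"),
    ("test", "make test"),
    ("package", "make dist"),
]

def run_build(steps=BUILD_STEPS):
    """Run each build step in order, stopping at the first failure.

    Returns (ok, completed): `ok` is True when every step succeeded,
    and `completed` names the steps that finished successfully.
    """
    completed = []
    for name, command in steps:
        if subprocess.run(command, shell=True).returncode != 0:
            return False, completed       # fail fast; later steps never run
        completed.append(name)
    return True, completed

if __name__ == "__main__":
    ok, _ = run_build()
    sys.exit(0 if ok else 1)              # nonzero exit marks the build broken
```

<p>Exiting nonzero on failure lets a CI server treat the whole pipeline as one command and mark the build red or green directly from the exit status.<\/p>
<p>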
In many cases, the build script not only compiles binaries, but also generates documentation, website pages, statistics, and distribution media (such as Windows MSI files, or RPM and DEB files).\n<\/p>\n<h3><span class=\"mw-headline\" id=\"Make_the_build_self-testing\">Make the build self-testing<\/span><\/h3>\n<p>Once the code is built, all tests should run to confirm that it behaves as the developers expect.\n<\/p>\n<h3><span class=\"mw-headline\" id=\"Everyone_commits_to_the_baseline_every_day\">Everyone commits to the baseline every day<\/span><\/h3>\n<p>By committing regularly, every committer can reduce the number of conflicting changes. Checking in a week's worth of work runs the risk of conflicting with other features and can be very difficult to resolve. Small, early conflicts in an area of the system prompt team members to communicate about the changes they are making.\n<\/p><p>Many programmers recommend committing all changes at least once a day (once per feature built), and in addition performing a nightly build.\n<\/p>\n<h3><span class=\"mw-headline\" id=\"Every_commit_.28to_baseline.29_should_be_built\">Every commit (to baseline) should be built<\/span><\/h3>\n<p>The system should build commits to the current working version to verify that they integrate correctly. A common practice is to use automated continuous integration, although this may be done manually. 
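<\/p>
<p>One minimal way to automate this is to poll the repository head and build whenever it changes. In the sketch below, <code>get_head<\/code> and <code>build<\/code> are hypothetical stand-ins for a real version control query and build command, not any particular server's API.<\/p>

```python
import time

def watch_and_build(get_head, build, poll_seconds=60, max_polls=None):
    """Poll the repository and build each new head revision seen.

    `get_head()` returns the current tip revision identifier and
    `build(rev)` runs the build for it; both are hypothetical hooks
    that a real server would wire to its VCS and build tooling.
    `max_polls` bounds the loop (None means run forever).
    """
    last_built = None
    polls = 0
    while max_polls is None or polls < max_polls:
        head = get_head()
        if head != last_built:
            build(head)               # a new revision reached the tip
            last_built = head
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(poll_seconds)  # wait before the next check
    return last_built
```

<p>Real CI servers usually replace polling with commit hooks and combine it with the quiet-period timing discussed earlier, but the monitor-then-build loop is the core of the idea.<\/p>
<p>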
For many, continuous integration is synonymous with using automated continuous integration, where a continuous integration server or daemon monitors the version control system for changes, then automatically runs the build process.\n<\/p>\n<h3><span class=\"mw-headline\" id=\"Keep_the_build_fast\">Keep the build fast<\/span><\/h3>\n<p>The build needs to complete rapidly, so that if there is a problem with integration, it is quickly identified.\n<\/p>\n<h3><span class=\"mw-headline\" id=\"Test_in_a_clone_of_the_production_environment\">Test in a clone of the production environment<\/span><\/h3>\n<p>A system that passes its tests in a test environment can still fail when deployed to production, because the production environment may differ from the test environment in significant ways. However, building a full replica of the production environment is often cost prohibitive. Instead, the pre-production environment should be a scaled-down version of the actual production environment, reducing cost while preserving the composition and nuances of the technology stack.\n<\/p>\n<h3><span class=\"mw-headline\" id=\"Make_it_easy_to_get_the_latest_deliverables\">Make it easy to get the latest deliverables<\/span><\/h3>\n<p>Making builds readily available to stakeholders and testers can reduce the amount of rework necessary when rebuilding a feature that doesn't meet requirements. Additionally, early testing reduces the chances that defects survive until deployment. 
Finding errors earlier also, in some cases, reduces the amount of work necessary to resolve them.\n<\/p>\n<h3><span class=\"mw-headline\" id=\"Everyone_can_see_the_results_of_the_latest_build\">Everyone can see the results of the latest build<\/span><\/h3>\n<p>It should be easy to find out whether and where the build breaks, and who made the relevant change.\n<\/p>\n<h3><span class=\"mw-headline\" id=\"Automate_deployment\">Automate deployment<\/span><\/h3>\n<p>Most CI systems allow scripts to be run after a build finishes. In most situations, it is possible to write a script to deploy the application to a live test server that everyone can look at. A further advance in this way of thinking is the concept of \"continuous deployment,\" which calls for the software to be deployed directly into production, often with additional automation to prevent defects or regressions.<sup id=\"rdp-ebb-cite_ref-RiesCont09_5-0\" class=\"reference\"><a href=\"#cite_note-RiesCont09-5\" rel=\"external_link\">[5]<\/a><\/sup><sup id=\"rdp-ebb-cite_ref-FitzCont09_6-0\" class=\"reference\"><a href=\"#cite_note-FitzCont09-6\" rel=\"external_link\">[6]<\/a><\/sup>\n<\/p>\n<h2><span class=\"mw-headline\" id=\"Advantages_and_disadvantages\">Advantages and disadvantages<\/span><\/h2>\n<h3><span class=\"mw-headline\" id=\"Advantages\">Advantages<\/span><\/h3>\n<p>Continuous integration has many advantages<sup id=\"rdp-ebb-cite_ref-FowlerCI00_4-2\" class=\"reference\"><a href=\"#cite_note-FowlerCI00-4\" rel=\"external_link\">[4]<\/a><\/sup>:\n<\/p>\n<ul><li> ability to revert the codebase to a bug-free state, without wasting time debugging, when unit tests fail or a bug emerges;<\/li>\n<li> ability to detect and fix integration problems continuously, avoiding last-minute chaos at release dates (when everyone tries to check in their slightly incompatible versions);<\/li>\n<li> early warning of broken\/incompatible code;<\/li>\n<li> early warning of conflicting changes;<\/li>\n<li> immediate 
unit testing of all changes;<\/li>\n<li> constant availability of a \"current\" build for testing, demo, or release purposes;<\/li>\n<li> immediate feedback to developers on the quality, functionality, or system-wide impact of code they are writing;<\/li>\n<li> modular, less complex code, often a result of frequent check-ins by developers; and<\/li>\n<li> metrics generated from automated testing and CI (such as metrics for code coverage, code complexity, and features complete) focus developers on producing functional, high-quality code and help build momentum in a team.<\/li><\/ul>\n<h3><span class=\"mw-headline\" id=\"Disadvantages\">Disadvantages<\/span><\/h3>\n<p>Continuous integration also has disadvantages:\n<\/p>\n<ul><li> initial setup time required;<\/li>\n<li> well-developed test suite required to achieve automated testing advantages;<\/li>\n<li> large-scale refactoring can be troublesome due to the continuously changing code base; and<\/li>\n<li> hardware costs for build machines can be significant.<\/li><\/ul>\n<p>Many teams using CI report that its advantages far outweigh its disadvantages.<sup id=\"rdp-ebb-cite_ref-RichardsonAgile08_7-0\" class=\"reference\"><a href=\"#cite_note-RichardsonAgile08-7\" rel=\"external_link\">[7]<\/a><\/sup> Finding and fixing integration bugs early in the development process saves both time and money over the lifespan of a project.\n<\/p>\n<h2><span class=\"mw-headline\" id=\"Software\">Software<\/span><\/h2>\n<p>Software tools such as automated build servers can be employed to support continuous integration.\n<\/p><p>Software tools for continuous integration include:\n<\/p>\n<ul><li> AnthillPro \u2014 continuous integration server by Urbancode<\/li>\n<li> Apache Continuum \u2014 continuous integration server supporting Apache Maven and Apache Ant. 
Supports CVS, Subversion, Ant, Maven, and shell scripts<\/li>\n<li> Apache Gump \u2014 continuous integration tool by Apache<\/li>\n<li> Automated Build Studio \u2014 proprietary automated build, continuous integration and release management system by AutomatedQA<\/li>\n<li> Bamboo \u2014 proprietary continuous integration server by Atlassian Software Systems<\/li>\n<li> BuildBot \u2014 Python\/Twisted-based continuous build system<\/li>\n<li> BuildForge \u2014 proprietary automated build engine by IBM\/Rational<\/li>\n<li> BuildMaster \u2014 proprietary application lifecycle management and continuous integration tool by Inedo<\/li>\n<li> CABIE (Continuous Automated Build and Integration Environment) \u2014 open source, written in Perl; works with CVS, Subversion, AccuRev, Bazaar, and Perforce<\/li>\n<li> Cascade \u2014 proprietary continuous integration tool; provides a checkpointing facility to build and test changes before they are committed<\/li>\n<li> codeBeamer \u2014 proprietary collaboration software with built-in continuous integration features<\/li>\n<li> CruiseControl \u2014 Java-based framework for a continuous build process<\/li>\n<li> CruiseControl.NET \u2014 .NET-based automated continuous integration server<\/li>\n<li> CruiseControl.rb \u2014 lightweight, Ruby-based continuous integration server that can build any codebase, not only Ruby, released under Apache License 2.0<\/li>\n<li> ElectricCommander \u2014 proprietary continuous integration and release management solution from Electric Cloud<\/li>\n<li> FinalBuilder Server \u2014 proprietary automated build and continuous integration server by VSoft Technologies<\/li>\n<li> Go \u2014 proprietary agile build and release management software by Thoughtworks<\/li>\n<li> Jenkins (formerly known as Hudson) \u2014 MIT-licensed, written in Java, runs in a servlet container, supports CVS, Subversion, Mercurial, Git, StarTeam, Clearcase, Ant, NAnt, Maven, and shell scripts<\/li>\n<li> Software Configuration and 
Library Manager \u2014 software configuration management system for z\/OS by IBM Rational Software<\/li>\n<li> QuickBuild \u2014 proprietary continuous integration server with a free community edition, featuring build life cycle management and pre-commit verification<\/li>\n<li> TeamCity \u2014 proprietary continuous integration server by JetBrains with a free professional edition<\/li>\n<li> Team Foundation Server \u2014 proprietary continuous integration server and source code repository by Microsoft<\/li>\n<li> Tinderbox \u2014 Mozilla-based product written in Perl<\/li>\n<li> Rational Team Concert \u2014 proprietary software development collaboration platform with built-in build engine by IBM, including Rational Build Forge<\/li><\/ul>\n<p>See the in-depth feature matrix linked in the external links for deeper comparisons.\n<\/p>\n<h2><span class=\"mw-headline\" id=\"Further_reading\">Further reading<\/span><\/h2>\n<ul><li> <span class=\"citation book\">Duvall, P.M.&#32;(2007).&#32;<i>Continuous Integration: 
Improving Software Quality and Reducing Risk<\/i>.&#32;Addison-Wesley.&#32;<a rel=\"external_link\" class=\"external text\" href=\"http:\/\/en.wikipedia.org\/wiki\/International_Standard_Book_Number\" target=\"_blank\">ISBN<\/a>&#160;0321336380.<\/span><span class=\"Z3988\" title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=book&amp;rft.btitle=Continuous+Integration.+Improving+Software+Quality+and+Reducing+Risk&amp;rft.aulast=Duvall%2C+P.M.&amp;rft.au=Duvall%2C+P.M.&amp;rft.date=2007&amp;rft.pub=Addison-Wesley&amp;rft.isbn=0321336380&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Continuous_Integration:_Part_2\"><span style=\"display: none;\">&#160;<\/span><\/span><\/li><\/ul>\n<h2><span class=\"mw-headline\" id=\"References\">References<\/span><\/h2>\n<div class=\"reflist\" style=\"list-style-type: decimal;\">\n<ol class=\"references\">\n<li id=\"cite_note-CunninghamInt09-1\"><span class=\"mw-cite-backlink\"><a href=\"#cite_ref-CunninghamInt09_1-0\" rel=\"external_link\">\u2191<\/a><\/span> <span class=\"reference-text\"><span class=\"citation web\">Cunningham, W.&#32;(20 December 2012).&#32;<a rel=\"external_link\" class=\"external text\" href=\"http:\/\/c2.com\/cgi\/wiki?IntegrationHell\" target=\"_blank\">\"Integration Hell\"<\/a>.&#32;<i>WikiWikiWeb<\/i><span class=\"printonly\">.&#32;<a rel=\"external_link\" class=\"external free\" href=\"http:\/\/c2.com\/cgi\/wiki?IntegrationHell\" target=\"_blank\">http:\/\/c2.com\/cgi\/wiki?IntegrationHell<\/a><\/span><span class=\"reference-accessdate\">.&#32;Retrieved 27 April 2016<\/span>.<\/span><span class=\"Z3988\" 
title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=bookitem&amp;rft.btitle=Integration+Hell&amp;rft.atitle=WikiWikiWeb&amp;rft.aulast=Cunningham%2C+W.&amp;rft.au=Cunningham%2C+W.&amp;rft.date=20+December+2012&amp;rft_id=http%3A%2F%2Fc2.com%2Fcgi%2Fwiki%3FIntegrationHell&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Continuous_Integration:_Part_2\"><span style=\"display: none;\">&#160;<\/span><\/span><\/span>\n<\/li>\n<li id=\"cite_note-BrauneisOSLC-2\"><span class=\"mw-cite-backlink\"><a href=\"#cite_ref-BrauneisOSLC_2-0\" rel=\"external_link\">\u2191<\/a><\/span> <span class=\"reference-text\"><span class=\"citation web\">Brauneis, D.; H\u00fcttermann, M.&#32;(16 January 2010).&#32;<a rel=\"external_link\" class=\"external text\" href=\"http:\/\/open-services.net\/pipermail\/community_open-services.net\/2010-January\/000214.html\" target=\"_blank\">\"[OSLC<\/a> Possible new Working Group - Automation\"].&#32;<i>open-services.net<\/i><span class=\"printonly\">.&#32;<a rel=\"external_link\" class=\"external free\" href=\"http:\/\/open-services.net\/pipermail\/community_open-services.net\/2010-January\/000214.html\" target=\"_blank\">http:\/\/open-services.net\/pipermail\/community_open-services.net\/2010-January\/000214.html<\/a><\/span><span class=\"reference-accessdate\">.&#32;Retrieved 27 April 2016<\/span>.<\/span><span class=\"Z3988\" 
title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=bookitem&amp;rft.btitle=%5BOSLC%5D+Possible+new+Working+Group+-+Automation&amp;rft.atitle=open-services.net&amp;rft.aulast=Brauneis%2C+D.%3B+H%C3%BCttermann%2C+M.&amp;rft.au=Brauneis%2C+D.%3B+H%C3%BCttermann%2C+M.&amp;rft.date=16+January+2010&amp;rft_id=http%3A%2F%2Fopen-services.net%2Fpipermail%2Fcommunity_open-services.net%2F2010-January%2F000214.html&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Continuous_Integration:_Part_2\"><span style=\"display: none;\">&#160;<\/span><\/span><\/span>\n<\/li>\n<li id=\"cite_note-TaylorRails09-3\"><span class=\"mw-cite-backlink\"><a href=\"#cite_ref-TaylorRails09_3-0\" rel=\"external_link\">\u2191<\/a><\/span> <span class=\"reference-text\"><span class=\"citation web\">Taylor, B.&#32;(10 February 2009).&#32;<a rel=\"external_link\" class=\"external text\" href=\"https:\/\/web.archive.org\/web\/20110303225845\/http:\/\/blog.railsmachine.com\/articles\/2009\/02\/10\/rails-deployment-and-automation-with-shadowpuppet-and-capistrano\" target=\"_blank\">\"Rails Deployment and Automation with ShadowPuppet and Capistrano\"<\/a>.&#32;<i>Rails Machine<\/i>.&#32;Archived&#32;from <a rel=\"external_link\" class=\"external text\" href=\"http:\/\/blog.railsmachine.com\/articles\/2009\/02\/10\/rails-deployment-and-automation-with-shadowpuppet-and-capistrano\/\" target=\"_blank\">the original<\/a>&#32;on 03 March 2011<span class=\"printonly\">.&#32;<a rel=\"external_link\" class=\"external free\" href=\"https:\/\/web.archive.org\/web\/20110303225845\/http:\/\/blog.railsmachine.com\/articles\/2009\/02\/10\/rails-deployment-and-automation-with-shadowpuppet-and-capistrano\" target=\"_blank\">https:\/\/web.archive.org\/web\/20110303225845\/http:\/\/blog.railsmachine.com\/articles\/2009\/02\/10\/rails-deployment-and-automation-with-shadowpuppet-and-capistrano<\/a><\/span><span 
class=\"reference-accessdate\">.&#32;Retrieved 27 April 2016<\/span>.<\/span><span class=\"Z3988\" title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=bookitem&amp;rft.btitle=Rails+Deployment+and+Automation+with+ShadowPuppet+and+Capistrano&amp;rft.atitle=Rails+Machine&amp;rft.aulast=Taylor%2C+B.&amp;rft.au=Taylor%2C+B.&amp;rft.date=10+February+2009&amp;rft_id=https%3A%2F%2Fweb.archive.org%2Fweb%2F20110303225845%2Fhttp%3A%2F%2Fblog.railsmachine.com%2Farticles%2F2009%2F02%2F10%2Frails-deployment-and-automation-with-shadowpuppet-and-capistrano&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Continuous_Integration:_Part_2\"><span style=\"display: none;\">&#160;<\/span><\/span><\/span>\n<\/li>\n<li id=\"cite_note-FowlerCI00-4\"><span class=\"mw-cite-backlink\">\u2191 <sup><a href=\"#cite_ref-FowlerCI00_4-0\" rel=\"external_link\">4.0<\/a><\/sup> <sup><a href=\"#cite_ref-FowlerCI00_4-1\" rel=\"external_link\">4.1<\/a><\/sup> <sup><a href=\"#cite_ref-FowlerCI00_4-2\" rel=\"external_link\">4.2<\/a><\/sup><\/span> <span class=\"reference-text\"><span class=\"citation web\">Fowler, M.&#32;(01 May 2006).&#32;<a rel=\"external_link\" class=\"external text\" href=\"http:\/\/www.martinfowler.com\/articles\/continuousIntegration.html\" target=\"_blank\">\"Continuous Integration\"<\/a>.&#32;<i>MartinFowler.com<\/i><span class=\"printonly\">.&#32;<a rel=\"external_link\" class=\"external free\" href=\"http:\/\/www.martinfowler.com\/articles\/continuousIntegration.html\" target=\"_blank\">http:\/\/www.martinfowler.com\/articles\/continuousIntegration.html<\/a><\/span><span class=\"reference-accessdate\">.&#32;Retrieved 27 April 2016<\/span>.<\/span><span class=\"Z3988\" 
title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=bookitem&amp;rft.btitle=Continuous+Integration&amp;rft.atitle=MartinFowler.com&amp;rft.aulast=Fowler%2C+M.&amp;rft.au=Fowler%2C+M.&amp;rft.date=01+May+2006&amp;rft_id=http%3A%2F%2Fwww.martinfowler.com%2Farticles%2FcontinuousIntegration.html&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Continuous_Integration:_Part_2\"><span style=\"display: none;\">&#160;<\/span><\/span><\/span>\n<\/li>\n<li id=\"cite_note-RiesCont09-5\"><span class=\"mw-cite-backlink\"><a href=\"#cite_ref-RiesCont09_5-0\" rel=\"external_link\">\u2191<\/a><\/span> <span class=\"reference-text\"><span class=\"citation web\">Ries, E.&#32;(30 March 2009).&#32;<a rel=\"external_link\" class=\"external text\" href=\"http:\/\/radar.oreilly.com\/2009\/03\/continuous-deployment-5-eas.html\" target=\"_blank\">\"Continuous deployment in 5 easy steps\"<\/a>.&#32;<i>O'Reilly Radar<\/i>.&#32;O'Reilly Media, Inc<span class=\"printonly\">.&#32;<a rel=\"external_link\" class=\"external free\" href=\"http:\/\/radar.oreilly.com\/2009\/03\/continuous-deployment-5-eas.html\" target=\"_blank\">http:\/\/radar.oreilly.com\/2009\/03\/continuous-deployment-5-eas.html<\/a><\/span><span class=\"reference-accessdate\">.&#32;Retrieved 27 April 2016<\/span>.<\/span><span class=\"Z3988\" title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=bookitem&amp;rft.btitle=Continuous+deployment+in+5+easy+steps&amp;rft.atitle=O%27Reilly+Radar&amp;rft.aulast=Ries%2C+E.&amp;rft.au=Ries%2C+E.&amp;rft.date=30+March+2009&amp;rft.pub=O%27Reilly+Media%2C+Inc&amp;rft_id=http%3A%2F%2Fradar.oreilly.com%2F2009%2F03%2Fcontinuous-deployment-5-eas.html&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Continuous_Integration:_Part_2\"><span style=\"display: 
none;\">&#160;<\/span><\/span><\/span>\n<\/li>\n<li id=\"cite_note-FitzCont09-6\"><span class=\"mw-cite-backlink\"><a href=\"#cite_ref-FitzCont09_6-0\" rel=\"external_link\">\u2191<\/a><\/span> <span class=\"reference-text\"><span class=\"citation web\">Fitz, T.&#32;(10 February 2009).&#32;<a rel=\"external_link\" class=\"external text\" href=\"http:\/\/timothyfitz.com\/2009\/02\/10\/continuous-deployment-at-imvu-doing-the-impossible-fifty-times-a-day\/\" target=\"_blank\">\"Continuous Deployment at IMVU: Doing the impossible fifty times a day\"<\/a>.&#32;<i>timothyfitz.com<\/i><span class=\"printonly\">.&#32;<a rel=\"external_link\" class=\"external free\" href=\"http:\/\/timothyfitz.com\/2009\/02\/10\/continuous-deployment-at-imvu-doing-the-impossible-fifty-times-a-day\/\" target=\"_blank\">http:\/\/timothyfitz.com\/2009\/02\/10\/continuous-deployment-at-imvu-doing-the-impossible-fifty-times-a-day\/<\/a><\/span><span class=\"reference-accessdate\">.&#32;Retrieved 27 April 2016<\/span>.<\/span><span class=\"Z3988\" title=\"ctx_ver=Z39.88-2004&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&amp;rft.genre=bookitem&amp;rft.btitle=Continuous+Deployment+at+IMVU%3A+Doing+the+impossible+fifty+times+a+day&amp;rft.atitle=timothyfitz.com&amp;rft.aulast=Fitz%2C+T.&amp;rft.au=Fitz%2C+T.&amp;rft.date=10+February+2009&amp;rft_id=http%3A%2F%2Ftimothyfitz.com%2F2009%2F02%2F10%2Fcontinuous-deployment-at-imvu-doing-the-impossible-fifty-times-a-day%2F&amp;rfr_id=info:sid\/en.wikipedia.org:LII:Medical_Device_Software_Development_with_Continuous_Integration\/Continuous_Integration:_Part_2\"><span style=\"display: none;\">&#160;<\/span><\/span><\/span>\n<\/li>\n<li id=\"cite_note-RichardsonAgile08-7\"><span class=\"mw-cite-backlink\"><a href=\"#cite_ref-RichardsonAgile08_7-0\" rel=\"external_link\">\u2191<\/a><\/span> <span cl