OUR PRINCIPAL METHOD for finding errors in implementing the specification is to take on the mentality of the most devious and obtuse user we have ever encountered and intentionally try to break the product by doing incorrectly everything the instruction manual says to do. In essence, this method explores the boundary cases of the specification.
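To make the "do it wrong on purpose" idea concrete, here is a minimal sketch in the style of a pytest suite. The speech_sdk module, its Recognizer class, and the error types are hypothetical stand-ins invented for illustration: each test takes an instruction from the manual, deliberately violates it, and expects a documented error rather than a crash or a hang.

```python
import pytest

# Hypothetical SDK; importorskip keeps the sketch runnable even without it.
speech_sdk = pytest.importorskip("speech_sdk")


def test_start_before_configure():
    # The manual says to configure an input device before calling start().
    # We skip that step on purpose and expect a clean, documented error.
    rec = speech_sdk.Recognizer()
    with pytest.raises(speech_sdk.NotConfiguredError):
        rec.start()


def test_stop_called_twice():
    # The manual says to call stop() exactly once per session. A second
    # call should be rejected gracefully, not corrupt state or hang.
    rec = speech_sdk.Recognizer()
    rec.configure(device="default")
    rec.start()
    rec.stop()
    with pytest.raises(speech_sdk.SessionClosedError):
        rec.stop()
```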
We also test more formally for:
- bugs found in a previous release
- errors (and/or usability issues) found in an equivalent product from another vendor
- errors which typically arise from poor programming practices we've seen at other companies while consulting for them
- errors which might come from potential limitations of the underlying algorithms used in the product

A list of such previous errors, collected for desktop speech products, can be found here; the sketch below shows one way such a catalogue can be re-run as regression checks on each new build.
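As one concrete illustration of the first item in the list above, previously reported bugs can be kept as a small machine-readable catalogue and re-run on every build. This is a sketch only; the bug ID, the description, and the still_fixed check are invented placeholders rather than entries from the actual list.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RegressionCase:
    bug_id: str                      # identifier from the earlier release's bug list
    description: str                 # what went wrong last time
    still_fixed: Callable[[], bool]  # reproduction check: True means the fix holds


def run_regressions(cases: List[RegressionCase]) -> List[str]:
    """Return the IDs of previously fixed bugs that have reappeared."""
    return [case.bug_id for case in cases if not case.still_fixed()]


# Example usage with a single made-up entry; a real check would drive the product.
catalogue = [
    RegressionCase(
        bug_id="DSP-142",
        description="recognizer hung on zero-length audio input",
        still_fixed=lambda: True,    # placeholder for a real reproduction script
    ),
]

print(run_regressions(catalogue))    # an empty list means nothing has come back
```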
In addition to experienced testers, when appropriate we also employ naive testers. Their inexpert understanding of both the technology and computers in general causes them to generate sequences of operations so far outside the norm that they send earthquakes through even the most robust code.
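Much of that effect can also be approximated in software: a simple driver that issues long random sequences of legal operations in illegal orders tends to find the same kind of breakage a naive user does. A minimal sketch follows, assuming a hypothetical app object whose methods correspond to the product's commands; the operation names are invented.

```python
import random
import traceback

# Hypothetical command names; in practice these map onto whatever
# operations the product under test actually exposes.
OPERATIONS = ["open_mic", "close_mic", "dictate", "undo", "save", "open_vocabulary"]


def random_session(app, steps: int = 200, seed: int = 0) -> None:
    """Drive the application with a long, out-of-order sequence of operations."""
    rng = random.Random(seed)      # fixed seed so any failure can be replayed exactly
    for step in range(steps):
        op = rng.choice(OPERATIONS)
        try:
            getattr(app, op)()     # invoke the operation regardless of current state
        except Exception:
            # Any unhandled exception is a finding: report where in the
            # sequence it happened, then re-raise so the harness records it.
            print(f"failure at step {step} on operation {op!r} (seed={seed})")
            traceback.print_exc()
            raise
```

Recording the seed matters: a crash produced by a random sequence is only useful if the same sequence can be replayed and reduced afterwards.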
                
If you have speech applications under development and wish to have an analysis performed along these lines, please contact us.
More on usability...
More on expandability...