End-To-End Agile Testing Using Incremental Approach for a Leading EIM Solution Provider

ABOUT THE CUSTOMER

Our customer is one of the global leaders in Enterprise Information Management (EIM) solutions, disrupting the ECM market by eliminating information silos and providing quick and easy access to the right content from any core business system and device. They have offices in the United States, the United Kingdom, Finland, France, Germany, Sweden and Australia, with about 500,000 global users across 100 countries, supported in 25 different languages.

TECHNOLOGY

• Desktop, QMS & CK Automation: TestComplete & JScript
• Web Automation: Selenium, Java and TestNG; Selenium, C# and NUnit
• Mobile Automation: Selenium, Appium & Java
• Connector API Automation: C# and NUnit

THE CHALLENGE

Business challenges:
• The product supports desktop, web, mobile and cloud, and had about 25,000 tests, which resulted in time-consuming manual activities and increased costs.
• Ensuring that each build release adhered to high quality standards.
• Many custom applications are built on the base product, and each application has its own release schedule.

Technical challenges:
• Frequent releases caused versioning issues that ultimately affected a wide range of customers.
• Environment compatibility issues across operating systems (Windows 7/8/10), browsers (Internet Explorer 11, Firefox, Chrome, Safari & Opera) and platforms (Windows & Mac).
• Application server and client compatibility issues were difficult to test within the given timeframe, especially during releases.
• Multi-language support and compatibility across the different applications in the product.
• A high flow of regression issues with release builds.

THE SOLUTION

Aspire provided end-to-end product testing, including manual and automation services, for Desktop UI, Web UI, Mobile UI and Connector API. Aspire executed automation tests to ensure high quality and provide quicker feedback.

1. DESKTOP UI AUTOMATION

• Implemented shift-left testing and encouraged running test automation for each developer change before it was merged to a stable branch.
• Implemented an incremental test automation approach and, using AFTA (Aspire's Framework for Test Automation), covered most of the repetitive manual testing processes by automating the test cases. Functional, compatibility, API, performance and security test automation were also implemented successfully.
• Integrated the CI pipeline using Jenkins and TeamCity for continuous execution of test automation on every build, eliminating manual steps.
• Developed around 6,580 test cases and was also responsible for retesting and following up on bugs raised in earlier builds until they were fixed.
• On every stable build, almost 9,000 tests with a total duration of 405 hours ran against Windows 7, Windows 8 and Windows 10 on both 64-bit and 32-bit operating systems. Cross-browser testing for Chrome, Firefox, IE11, Edge and Mac Safari, and testing on different versions of Android and iOS phones and tablets, were also executed (a minimal sketch of such a parameterized cross-browser test appears at the end of this section).
• Provided release documentation in multiple languages: English, Finnish, French and Dutch.

Highlights:
• Automation was executed 3-4 times for every build to ensure quality.
• Smoke testing covered compatibility in the different language builds.
• Performed post-release checks.
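To make the cross-browser runs concrete, here is a minimal sketch of how a suite like this is commonly parameterized with Selenium and TestNG, the web-automation stack listed under TECHNOLOGY. It is an illustration under assumptions, not AFTA's actual code: the class name, URL and title assertion are hypothetical placeholders.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.edge.EdgeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

public class CrossBrowserSmokeTest {
    private WebDriver driver;

    // TestNG injects "browser" from the suite XML, so the same test class
    // can be scheduled once per supported browser by the CI server.
    @Parameters("browser")
    @BeforeMethod
    public void setUp(String browser) {
        switch (browser.toLowerCase()) {
            case "chrome":  driver = new ChromeDriver();  break;
            case "firefox": driver = new FirefoxDriver(); break;
            case "edge":    driver = new EdgeDriver();    break;
            default: throw new IllegalArgumentException("Unsupported browser: " + browser);
        }
    }

    @Test(groups = "smoke")
    public void loginPageLoads() {
        // Hypothetical URL and title; a real test would target the EIM web client.
        driver.get("https://eim-product.example/login");
        Assert.assertTrue(driver.getTitle().contains("Login"), "Login page should load");
    }

    @AfterMethod
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}
```

With one test block per browser in the TestNG suite XML supplying the "browser" parameter, a single Jenkins or TeamCity job can fan the same test classes out across the whole browser matrix on every build.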
2. WEB UI AUTOMATION

Aspire provided web automation testing services to the customer for their end-to-end product testing. An open-source automation tool, feasible for most of the product requirements, was used to reduce software costs.

Highlights:
• End-to-end automation using continuous integration tools like Jenkins and TeamCity.
• Identified regression issues at an early stage, leading to smooth releases.
• Executed cross-browser testing.
• Provided detailed dashboard reports through an open-source reporting plugin.

3. WEB & MOBILE MANUAL TESTING

Aspire extended its support with ad-hoc testing on the builds to identify defects not covered by the sprint scope. We developed 11,000+ test cases for web and 2,000 for mobile, and in every sprint identified an average of 50-60 defects as part of the testing services. We also retested bugs raised in earlier builds to make sure the fixes worked as expected.

Highlights:
• High test coverage through smoke, user acceptance, regression and ad-hoc tests at different levels.
• Covered on-premises and cloud test environments.
• Ensured zero defect leakage.
• Manual regression testing was done every sprint, based on the user stories.
• Ad-hoc testing on pre-release builds after system testing increased test coverage.

4. MOBILE UI AUTOMATION

Aspire provided mobile automation testing services to the customer for their native applications, covering 2,000+ tests of the product.

Solution and approach:
• An open-source automation tool was used to reduce software costs.
• The web automation framework was reused, so less effort was required to scale scripts across Android and iOS (see the sketch following this list).
• Supported different versions of Android and iOS, mobile and tablet form factors, and portrait and landscape views.
• Compatibility testing was executed across Android and iOS versions and server/mobile application combinations.
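The reuse described above works because Appium's Java client implements the same WebDriver interface the Selenium web suite is written against, so page objects and helpers carry over to native apps with only a different driver and capabilities. The sketch below shows the idea; the device name, app path and Appium server URL are assumed placeholders, not the project's real configuration.

```java
import java.net.URL;

import org.openqa.selenium.ScreenOrientation;
import org.openqa.selenium.remote.DesiredCapabilities;

import io.appium.java_client.android.AndroidDriver;

public class MobileSessionSketch {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        // Illustrative values only; real device names, OS versions and the
        // app package depend on the device lab and the product build under test.
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "Pixel-Lab-Device");
        caps.setCapability("automationName", "UiAutomator2");
        caps.setCapability("app", "/builds/latest/eim-client.apk");

        // AndroidDriver implements WebDriver, so the web suite's page objects
        // and assertions can run against this session unchanged.
        AndroidDriver driver = new AndroidDriver(
                new URL("http://127.0.0.1:4723/wd/hub"), caps); // Appium 1.x endpoint style

        try {
            // Portrait & landscape coverage, as called out in the approach above.
            driver.rotate(ScreenOrientation.LANDSCAPE);
            driver.rotate(ScreenOrientation.PORTRAIT);
        } finally {
            driver.quit();
        }
    }
}
```

The same pattern with IOSDriver covers the iOS side, which is why scaling the scripts across platforms took comparatively little effort.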
5. CONNECTOR API AUTOMATION

Aspire provides API automation testing services to the customer for their UI-extension applications, which connect various external repositories. We are also automating a suite of 1,000+ tests covering 10 connector applications, with 10 more connectors to be developed each year.

Highlights:
• An open-source automation tool was used to reduce software costs.
• End-to-end automation, integrated with CI tools.
• Early identification of compatibility issues.
• Basic performance test reports to measure performance across different UI-extension applications.
• Extent Reports used for reporting.

THE RESULT

Aspire's end-to-end testing helped the customer streamline their testing process and raise product quality with faster market releases.

• More environment test coverage through test automation.
• Faster feedback using shift-left and continuous testing.
• Reduced manual effort by executing automation wherever possible, so that QA members could focus on user-story and UAT tests for a quicker, more reliable product release.

Project                   | Test Suite               | #Test Cases | Automated Cases | Manual Effort (Hours) | Automation Effort (Hours) | Frequency       | Saved Effort (Hours)
--------------------------|--------------------------|-------------|-----------------|-----------------------|---------------------------|-----------------|---------------------
Desktop UI Automation     | Full Regression testing  | 8000        | 5300            | 884                   | 75                        | Every two weeks | 809
Desktop UI Automation     | Sanity testing           | 1493        | 1206            | 201                   | 7                         | Daily           | 194
Desktop UI Automation     | Smoke testing            | 254         | 254             | 42                    | 3                         | Daily           | 39
Desktop UI Automation     | Compatibility testing    | 2000        | 1476            | 246                   | 96                        | Every two weeks | 150
Web UI Automation         | Full Regression testing  | 8000        | 2670            | 445                   | 15                        | Monthly         | 430
Web UI Automation         | Sanity testing           | 937         | 937             | 156                   | 6                         | Daily           | 150
Web UI Automation         | Smoke testing            | 166         | 166             | 28                    | 1                         | Daily           | 27
Mobile UI Automation      | Smoke testing            | 108         | 108             | 18                    | 3                         | Weekly          | 15
Mobile UI Automation      | Regression               | 2000        | 212             | 35                    | 5                         | Monthly         | 30
Compliance Kit Automation | Regression               | 1000        | 610             | 102                   | 6                         | Weekly          | 96
Connector API Automation  | Regression               | 734         | 324             | 54                    | 3                         | Daily           | 51

ABOUT ASPIRE

Aspire Systems is a global technology services firm serving as a trusted technology partner for its customers. We work with some of the world's most innovative enterprises and independent software vendors, helping them leverage technology and outsourcing in our specific areas of expertise. Our core philosophy of "Attention. Always." communicates our belief in lavishing care and attention on our customers and employees.

For more information, contact [email protected] or visit www.aspiresys.com.
SINGAPORE: +65 3163 3050 | NORTH AMERICA: +1 630 368 0970 | EUROPE: +44 203 170 6115 | INDIA: +91 44 6740 4000 | MIDDLE EAST: +971 50 658 8831