Core Features and Functionalities

Strategy Studio: Generating Your Core Documents

The Strategy Studio is your gateway to leveraging TestingAIde's AI for foundational document creation within your project. Here, you can generate comprehensive Product Requirement Documents (PRDs), Test Strategies, and Test Plans, ensuring a solid and consistent starting point for your testing efforts.

TestingAIde provides flexible options for initiating your document generation: by uploading an existing PRD, or by generating a PRD directly from your codebase via GitHub.

Generating a PRD from Codebase (via GitHub)

This method allows TestingAIde to analyze your existing code and documentation within a GitHub repository to automatically generate an initial PRD draft, saving significant time.

  1. Select Document Type: Navigate to the Strategy Studio from the Project Sidebar and click "Generate PRD from Codebase."
  2. If your repository is private, select the Private Repo checkbox and enter your GitHub token.

Click "Submit" to allow TestingAIde to process your codebase.

TestingAIde's AI will analyze your codebase and generate a draft PRD. Review the generated document.

PRD Content View (Example):

Uploading an Existing PRD

If you already have a PRD document (e.g., in PDF, DOCX, or text format), you can upload it directly to TestingAIde for the AI to parse and use as a basis for subsequent document generation (Test Plans, Test Strategies, Test Cases).

  1. Select Document Type: Navigate to the Strategy Studio and select "Generate PRD."
  2. Choose "Upload Documents."
  3. Upload Your PRD File: In the popup, drag and drop the file or click the "Upload from Device" button to select your PRD document from your local machine.

TestingAIde will process the file.

Generating Test Strategy and Test Plan

Once your PRD is in TestingAIde (either generated or uploaded), you can seamlessly proceed to create your Test Strategy and Test Plan.

  1. From the Strategy Studio, choose either "Generate Test Strategy" or "Test Plan Creation."
  2. Link to PRD: TestingAIde will prompt you to link the document to an existing PRD. This ensures that the generated strategy and plan are contextually aligned with the product requirements.
  3. If You're Generating a Test Plan:
    1. Select a Strategy: Choose an existing test strategy or create a new one.
    2. Connect JIRA: Select your preconfigured JIRA credentials to fetch user stories automatically.
    3. Choose Release Version: Select the relevant release for which you want to generate the test plan.
    4. Pick Epics and User Stories: Choose the specific epics and stories to include in the plan.

This structured process helps TestingAIde generate accurate and actionable testing documents tailored to your product requirements.

Case Composer: Craft Your Test Cases

With your Test Strategy and Test Plan in place, TestingAIde enables you to generate high-quality test cases based on your user stories and descriptions.

Generate Test Cases Using AI

1. Feed Artifacts (Optional but Recommended): Upload supporting artifacts such as:

  • PRD (Product Requirement Document)
  • Video Flow (demo/walkthrough of the feature)

These artifacts help TestingAIde's engine generate more accurate and context-aware test cases. While not mandatory, providing them significantly improves coverage and relevance.

2. Select JIRA or Azure Stories and the Domain: Choose the relevant JIRA or Azure user stories using your pre-configured credentials, then pick the relevant test case domain: Mobile or Web.

3. Enhance with Context-Aware AI:

Optionally, select a fine-tuned LLM (Large Language Model) and enable RAG (Retrieval-Augmented Generation) to improve the relevance and detail of the generated test scripts.

4. Generate Test Cases: If you prefer to use the platform's default AI model, simply click Confirm. TestingAIde's AI engine will then automatically generate comprehensive, context-aware test cases for your application.

Import Your Own Test Cases

If you would like to use your existing test cases, you can import them directly into the platform.

  • Download the sample format provided below to ensure your file is structured correctly for import. This sample can also be accessed within the platform.

Sample Test Cases: Sample_Test_CaseXLS

  • Alternatively, you can manually add a test case for quick mapping.

This flexible approach lets you either:

  • Leverage AI-driven generation for speed and precision, or
  • Maintain full control by importing custom test cases.

After a test case is generated, it will appear under the associated user story.

  • Click on the test case to view its details.
  • You can edit the steps to better match your specific requirements.
note

Including a human review at this stage helps ensure the test cases are accurate and aligned with your application's needs.

TestingAIde also supports the generation of API Test Cases. To enable this:

  1. Provide API Collection: The user must supply the API details by either of the following methods:
    1. Upload an API collection (e.g., Postman collection) via the provided popup dialog.
    2. Enter a Swagger/OpenAPI URL for automatic import.
  2. Importing APIs: Once the collection or URL is submitted, TestingAIde will import and list the available API endpoints.
  3. Generate Test Cases: After import, click the "Generate Test Cases" button to automatically create relevant API test cases.

This feature ensures comprehensive API testing coverage based on the uploaded specifications.
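To make the import step concrete, here is a hand-written sketch (not TestingAIde's actual code) of how an OpenAPI-style specification can be flattened into the kind of endpoint list the platform displays after import. The `sample_spec` dictionary is a made-up example:

```python
# Illustrative only: flatten an OpenAPI-style "paths" object into
# (method, path) pairs, similar in spirit to the endpoint list shown
# after importing a collection or Swagger/OpenAPI URL.
def list_endpoints(spec):
    """Return sorted (METHOD, path) pairs from an OpenAPI-style spec dict."""
    endpoints = []
    for path, methods in spec.get("paths", {}).items():
        for method in methods:
            endpoints.append((method.upper(), path))
    return sorted(endpoints)

# A made-up minimal spec for illustration.
sample_spec = {
    "openapi": "3.0.0",
    "paths": {
        "/users": {"get": {}, "post": {}},
        "/users/{id}": {"get": {}, "delete": {}},
    },
}

print(list_endpoints(sample_spec))
# [('DELETE', '/users/{id}'), ('GET', '/users'), ('GET', '/users/{id}'), ('POST', '/users')]
```

Each listed endpoint then becomes a candidate for test case generation.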

Execution Engine

The Execution Engine is the core of TestingAIde.

App Configuration

Here, you provide the login details for your web application's test environment.

  • To add a new configuration, click on “Add New Application”.

  • Enter the application URL, username/email, and password for the test account.
  • TestingAIde uses these credentials to log in and analyze your application.
note

If your application undergoes changes, you will need to re-enter the details to ensure accurate execution.

Once your application has been successfully analyzed, navigate to the Script Generation segment. Here, you can use the previously generated test cases to begin creating test scripts tailored to your chosen tools and frameworks.

Follow these steps to generate test scripts in TestingAIde.

Generating Test Scripts

  1. Select Test Cases: Pick one or more test cases from the test case list.
  2. Select Tool, Language, and Framework: Choose the automation tool (e.g., Selenium, Playwright), the language (Java, Python, JavaScript/TypeScript, C#), and the framework (TestNG, Pytest, Playwright Test, Behave, Pytest-BDD). TestingAIde currently supports the following combinations:

| Tool | Language | Framework |
| --- | --- | --- |
| Selenium | Java | TestNG |
| Selenium | Python | Pytest, Behave, Pytest-BDD |
| Playwright | TypeScript (Node.js) | Playwright Test |
| Playwright | Java | TestNG, Playwright Test |
| Playwright | Python | Pytest, Behave |

  3. Once you have chosen your stack, click on “Generate Test Scripts” to begin the script generation process. Once the scripts have been generated, they will be available in the dock below.

The generated scripts are displayed directly within the application. From the action panel above, you can:

  • Run the scripts
  • Edit or Save changes
  • Download for external use
  • Archive for later reference
  • Delete if no longer needed
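For reference, generated scripts follow the conventions of the framework you selected. The sketch below is a hand-written illustration of the Pytest shape (fixtures plus plain asserts), not actual TestingAIde output; the `LoginPage` class is a stub standing in for real browser automation, and the account credentials are invented placeholders:

```python
# Hand-written sketch of the Pytest conventions a generated script follows.
# LoginPage is a stub; a real generated script would drive a browser
# via Selenium or Playwright instead of checking an in-memory dict.
import pytest

class LoginPage:
    """Stub standing in for a real page object."""
    VALID = {"qa_user": "s3cret"}  # hypothetical test account

    def login(self, username, password):
        # Returns True when the credentials match the stubbed account.
        return self.VALID.get(username) == password

@pytest.fixture
def login_page():
    # In a real script, this fixture would start and tear down a browser session.
    return LoginPage()

def test_login_with_valid_credentials(login_page):
    assert login_page.login("qa_user", "s3cret")

def test_login_with_invalid_credentials(login_page):
    assert not login_page.login("qa_user", "wrong")
```

The Run action in the panel executes scripts like these inside the platform; the Download action lets you run them in your own CI instead.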

API Test Scripts

You can automatically generate and execute API test scripts directly within the platform.

Steps to Generate and Execute API Test Script

  1. Select the Test Case you want to create a script for.
  2. Click on the Generate Script button below.
  3. Once generated, use the inbuilt execution component to run the script.
  4. View the execution results immediately within the same interface.
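At their core, API test scripts assert on the status code and response body of each call. A minimal, hand-written sketch of that assertion pattern (the response dict is hard-coded for illustration; a real script would obtain it from an HTTP call against the imported endpoint):

```python
# Illustration of the assertion pattern an API test script applies.
# The 'response' body below is hard-coded; a real script would get it
# from an HTTP client call against the endpoint under test.
def check_response(status_code, body, expected_status=200, required_fields=()):
    """Return a list of failure messages (an empty list means the check passed)."""
    failures = []
    if status_code != expected_status:
        failures.append(f"expected status {expected_status}, got {status_code}")
    for field in required_fields:
        if field not in body:
            failures.append(f"missing field: {field}")
    return failures

response = {"id": 42, "name": "Ada"}
print(check_response(200, response, required_fields=("id", "name")))
# []
print(check_response(404, response, required_fields=("id", "email")))
# ['expected status 200, got 404', 'missing field: email']
```

The execution results shown in the interface correspond to which of these checks passed or failed per endpoint.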

Results and Tracking

  • Execution results are displayed in real time after each run.
  • Historical trends and performance metrics are available on the Dashboard, enabling you to track stability, success rates, and failures over time.

Page Insights Testing

TestingAIde provides a Page Insight Testing feature that allows you to evaluate your application's Performance, Accessibility, SEO, and Best Practices. This helps you quickly identify areas for improvement in your UI.

Steps

  1. Navigate to Page Insight Testing from the menu.
  2. Click Add New.
  3. Enter the URL of the page you want to test.
  4. Click Proceed to start the scan.

The scan will begin, and once completed, a detailed report will be generated covering all the above-mentioned criteria.

Viewing Results

The results page provides comprehensive insights across four categories:

  • Performance
    • Measures page load speed, responsiveness, and overall user experience.
    • Highlights metrics such as First Contentful Paint (FCP), Largest Contentful Paint (LCP), Time to Interactive (TTI), and Total Blocking Time (TBT).
    • Helps identify scripts, images, or network requests slowing down the page.
  • Accessibility
    • Evaluates how usable the page is for all users, including those with disabilities.
    • Checks for color contrast, ARIA labels, screen reader compatibility, and keyboard navigation.
    • Flags missing alt attributes, unlabeled form fields, and other accessibility issues.
  • SEO (Search Engine Optimization)
    • Validates whether the page follows best practices for search engines.
    • Includes checks for meta tags, structured data, mobile-friendliness, and page indexing.
    • Highlights missing or duplicate page titles, descriptions, or canonical tags.
  • Best Practices
    • Ensures the page follows modern security and coding standards.
    • Looks for issues like usage of deprecated APIs, insecure requests (HTTP vs HTTPS), and browser compatibility concerns.
    • Provides suggestions to improve overall maintainability and reliability.

Report Interaction

  • Expand each accordion section to view:
    • Category score (0–100 scale).
    • Detailed analysis of strengths and issues.
    • Criteria used for testing along with improvement suggestions.

This structured report ensures testers can pinpoint issues quickly and share actionable findings with developers.
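The 0–100 category scores can be triaged with the familiar Lighthouse-style bands. The band boundaries below are the usual Lighthouse convention, assumed here for illustration rather than documented by TestingAIde:

```python
def score_band(score):
    """Classify a 0-100 category score using Lighthouse-style bands."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 90:
        return "good"
    if score >= 50:
        return "needs improvement"
    return "poor"

print(score_band(95))  # good
print(score_band(67))  # needs improvement
print(score_band(30))  # poor
```

A simple triage rule like this is useful when deciding which category accordion to expand first.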

Dynamic Application Security Testing (DAST)

TestingAIde also provides Dynamic Application Security Testing (DAST), which allows you to scan running applications for security vulnerabilities in real time. This helps identify potential weaknesses that could be exploited in production environments.

Steps to Run a DAST Scan

  1. Navigate to the DAST tab from the menu.
  2. Click Add New.
  3. Enter the URL of the application or page you want to test.
  4. Start the scan.

Once the scan is completed:

  • On the left-hand side, you will see a list of completed scans. Click on a scan entry to view its report.
  • On the right-hand side, detailed scan results will be displayed. These results include:
    • Identified vulnerabilities detected during the scan.
    • Severity and priority levels (e.g., High, Medium, Low) to help you focus on critical issues first.

    • Technical details of each finding, such as affected endpoints, parameters, and potential attack vectors.

Static Application Security Testing (SAST)

TestingAIde also provides Static Application Security Testing (SAST), which analyzes the source code of your application to detect security vulnerabilities, code quality issues, and maintainability risks before the code is executed. This helps ensure a more secure and reliable codebase.

Steps to Run a SAST Scan

  1. Navigate to the SAST tab from the menu.
  2. Click Add New Repo.
  3. Provide the GitHub repository link of the project you want to scan.
  4. Once the repository is added, it will appear in the list.

  5. Click on the repository entry in the table to open its details.
  6. Press the Start Scan button to initiate a new scan.

Viewing Results

  • Once a scan is completed, it will appear in the left-hand table under the selected repository.
  • Click on the scan entry to view detailed results on the right-hand side.
  • The details include:
    • Identified security issues within the codebase.
    • Severity levels (e.g., Critical, Major, Minor) to help prioritize fixes.
    • Code-level insights, such as the affected files, functions, or lines of code.
    • Remediation guidance to help developers address the findings effectively.

Each scan report can be exported for documentation, auditing, or sharing with stakeholders. The platform also shows overall trends over time, helping you track improvements or recurring issues across multiple scans.

Performance Test

Performance tests are run against your previously uploaded APIs.

Mobile Testing

TestingAIde also supports mobile application testing for Android apps. The process is similar to the earlier application configuration steps.

Configure the App

  1. Navigate to the Mobile App Configuration section.
  2. Click Add New Application.
  3. Provide the required app details and credentials.

Generate Test Scripts

  1. Go to the Script Generation section.
  2. Select the desired programming language and relevant test cases.
  3. Click Generate Test Scripts.

The generated scripts will appear in the dock below.

Execute & Monitor Tests

  • Use the Run option in the dock to execute the test.
  • A live emulator is available alongside the dock, allowing you to observe the test execution in real time.
  • You can also Download the scripts for offline use.

IVR Testing

TestingAIde allows you to configure and test Interactive Voice Response (IVR) flows seamlessly. To add a flow:

  • Navigate to the IVR Configuration section.
  • Click on Add New Configuration.

Provide the following details:

  • IVR Number – The IVR endpoint you want to test.
  • Caller Number – The number you want to test from.
  • Twilio Account ID – Your Twilio account identifier.
  • Twilio API Auth Token – Authentication token for secure access.

Once saved, your IVR configuration is ready. To generate the test scripts:

  1. Go to the Script Generation panel.
  2. Select the relevant test cases.
  3. Click on Generate.

The generated scripts will appear in the dock below. The header allows you to:

  • Edit – Modify the generated scripts to match your requirements.
  • Download – Save the scripts locally for execution outside the platform.
  • Delete – Remove scripts that are no longer needed.

Dashboard

The Dashboard provides a centralized view to monitor key project metrics across the platform. It allows you to:

  • Track the count of datasets, test cases, and test scripts.
  • View metrics and trends across all projects.
  • Monitor token usage to manage resource consumption.
  • Review execution metrics, including successful and failed runs.
  • Analyze performance test results within each project.

This consolidated view helps testers and stakeholders quickly assess project health, resource usage, and overall testing effectiveness.

Test Data Generator

The Test Data Generator allows you to create synthetic test data that can be used in your test cases.

How it Works

  1. Upload the Schema – Provide the data schema you want to use.
  2. Infer the Schema – The platform automatically interprets the schema structure.
  3. Generate Data – Specify the variance (to introduce diversity) and the quantity of records you need.

This lets you quickly create realistic but synthetic data for testing without exposing sensitive production data. You can scale test coverage by generating large volumes of data on demand.
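The three steps above can be sketched in a few lines. This is a back-of-the-envelope illustration using only the standard library; the `{field: type}` schema format and the interpretation of "variance" (higher values widen string lengths and numeric ranges) are assumptions for the sketch, not TestingAIde's actual format:

```python
import random
import string

def generate_records(schema, quantity, variance=0.3, seed=None):
    """Generate synthetic records from a simple {field: type} schema.

    variance (0..1) controls spread: higher values produce longer
    strings and larger numeric ranges.
    """
    rng = random.Random(seed)
    str_len = 4 + int(variance * 12)
    int_max = int(100 * (1 + 9 * variance))
    records = []
    for _ in range(quantity):
        record = {}
        for field, ftype in schema.items():
            if ftype == "int":
                record[field] = rng.randint(0, int_max)
            elif ftype == "str":
                record[field] = "".join(rng.choices(string.ascii_lowercase, k=str_len))
            elif ftype == "bool":
                record[field] = rng.random() < 0.5
        records.append(record)
    return records

schema = {"user_id": "int", "username": "str", "active": "bool"}
rows = generate_records(schema, quantity=5, variance=0.5, seed=42)
print(len(rows), sorted(rows[0]))
# 5 ['active', 'user_id', 'username']
```

Passing a fixed `seed` makes the output reproducible, which is handy when the same dataset must back repeated test runs.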

tip

💡 This feature is especially useful when you need sample data for new test cases, performance testing, or validating workflows in a safe and controlled manner.