
QA Engineer - IoT, Automation, AI, Platform, System

Optimiza
Full-time
On-site
Amman, Amman Governorate, Jordan
Description

Location: Jordan

The Opportunity

Reporting to the Project Manager or QA Lead, you will evaluate complex, distributed systems—including IoT platforms, AI pipelines, and cloud-native architectures—to ensure robust performance, functional accuracy, security, and scalability. You will drive test automation, promote proactive QA engineering, and implement modern practices to uphold system-level reliability and integration integrity.

Key Responsibilities

- Design and execute both manual and automated test cases targeting IoT devices, edge-to-cloud systems, AI models, and platform services
- Influence requirements and engineering practices to reduce defect density and improve testability of AI-integrated and sensor-driven applications
- Identify and troubleshoot cross-layer issues (device, edge, backend, AI inference) using systematic diagnostics and logging tools
- Automate regression, integration, and system tests using Selenium, Pytest, Postman, Cucumber, or similar frameworks (a brief illustrative sketch follows this list)
- Develop test frameworks supporting device emulation, sensor data simulation, and model prediction validation
- Use tools such as JMeter, SoapUI, Postman, Locust, or Gatling to assess platform performance and scalability
- Perform full-stack QA, including API testing, message queue validation (MQTT/AMQP), cloud service verification, and model output analysis
- Conduct non-functional testing such as latency benchmarking, fault tolerance (e.g., network disconnection), and AI model response under load
- Build and maintain test automation pipelines integrated into CI/CD systems (e.g., Jenkins, GitHub Actions, GitLab CI)
- Own end-to-end test efforts, including test strategy definition, execution, reporting, traceability, and feedback to development
- Participate in agile ceremonies (sprint planning, retrospectives, backlog grooming) to provide test estimates and QA-focused risk assessments
- Drive continuous improvement by experimenting with new QA tools for device telemetry testing, AI confidence scoring, and automation frameworks
- Collaborate closely with embedded engineers, backend developers, data scientists, DevOps, and product teams to ensure system-wide quality
- Comply with the organization's QHSE (Quality, Health, Safety, and Environment), Business Continuity, Information Security, Privacy, Risk, Compliance Management, and Governance policies, procedures, plans, and related risk assessments
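
To make the automation work described above concrete, here is a minimal sketch of the kind of device-to-cloud regression check this role would own, written with Pytest, paho-mqtt, and requests. The broker host, topic layout, and platform API endpoint are hypothetical placeholders, not details from this posting.

```python
"""Minimal sketch of an automated device-to-cloud check (illustrative only).
Broker host, topic layout, and API endpoint are hypothetical placeholders."""

import json
import time
import uuid

import paho.mqtt.publish as mqtt_publish  # pip install paho-mqtt
import requests                           # pip install requests

BROKER_HOST = "mqtt.example-platform.local"        # hypothetical broker
TELEMETRY_TOPIC = "devices/{device_id}/telemetry"  # hypothetical topic layout
API_BASE = "https://api.example-platform.local"    # hypothetical platform API


def test_simulated_sensor_reading_is_ingested():
    device_id = f"sim-{uuid.uuid4().hex[:8]}"
    reading = {"temperature_c": 21.7, "ts": int(time.time())}

    # Simulate the device: publish one telemetry message over MQTT.
    mqtt_publish.single(
        TELEMETRY_TOPIC.format(device_id=device_id),
        payload=json.dumps(reading),
        qos=1,
        hostname=BROKER_HOST,
        port=1883,
    )

    # Poll the platform API until the reading shows up or we time out.
    latest = None
    deadline = time.time() + 30
    while time.time() < deadline:
        resp = requests.get(f"{API_BASE}/devices/{device_id}/latest", timeout=5)
        if resp.status_code == 200:
            latest = resp.json()
            break
        time.sleep(2)

    # Assert end-to-end ingestion and value integrity.
    assert latest is not None, "telemetry was never ingested"
    assert latest["temperature_c"] == reading["temperature_c"]
```

In a real suite the endpoints would come from environment-specific configuration, and the same test would run inside the CI/CD pipelines mentioned above.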

Skills and Attributes for Success

Your success will be measured by:

- Test automation coverage and reliability
- Device-to-platform integration validation
- Defect rates in production versus QA
- Reduction in manual regression cycles
- Adherence to test standards and secure testing practices
- Uptime/latency impact testing of integrated AI/IoT systems

Requirements


- Bachelor's or Master's degree in Computer Science, Embedded Systems, Information Technology, or equivalent experience
- 5–7+ years of QA experience in software testing (manual and automation), with a focus on IoT platforms, cloud systems, or AI pipelines
- Strong background in test automation with tools like Selenium, Pytest, REST Assured, or Cucumber
- Hands-on experience testing distributed architectures, REST APIs, message brokers (MQTT/Kafka), and cloud-based microservices
- Familiarity with embedded systems, sensor data streams, device simulation/emulation, or IoT gateways
- Experience with CI/CD integration, automated test triggers, and code-to-test mapping
- Good understanding of network protocols, system architecture, and cloud infrastructure (AWS/GCP/Azure)
- Solid experience in SQL or NoSQL for data validation and analytics layer testing (a short illustrative sketch follows this list)
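
As a small illustration of the SQL-based data validation mentioned in the last requirement, the sketch below uses Pytest with an in-memory SQLite database standing in for the analytics layer; the table and column names (sensor_readings, device_id, reading_ts) are hypothetical.

```python
"""Illustrative sketch of SQL-based analytics-layer validation.
SQLite stands in for whatever store the platform actually uses;
table and column names are hypothetical."""

import sqlite3

import pytest


@pytest.fixture
def analytics_db():
    # In a real suite this would connect to the analytics store;
    # an in-memory database with sample rows keeps the sketch runnable.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE sensor_readings "
        "(device_id TEXT, reading_ts INTEGER, temperature_c REAL)"
    )
    conn.executemany(
        "INSERT INTO sensor_readings VALUES (?, ?, ?)",
        [("dev-001", 1700000000, 21.4), ("dev-002", 1700000060, 22.1)],
    )
    yield conn
    conn.close()


def test_no_null_or_duplicate_readings(analytics_db):
    # Null device IDs would break device-to-platform traceability.
    nulls = analytics_db.execute(
        "SELECT COUNT(*) FROM sensor_readings WHERE device_id IS NULL"
    ).fetchone()[0]
    assert nulls == 0

    # Each device should report at most one reading per timestamp.
    dupes = analytics_db.execute(
        "SELECT device_id, reading_ts, COUNT(*) FROM sensor_readings "
        "GROUP BY device_id, reading_ts HAVING COUNT(*) > 1"
    ).fetchall()
    assert dupes == []
```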

Preferred Skills

- Exposure to AI/ML model testing, including inference behavior, response times, and false positive/negative checks
- Experience with BDD (Behavior-Driven Development) for complex, domain-driven test definitions
- Knowledge of platform observability tools (Grafana, Prometheus, Kibana) to correlate test outcomes with system logs and metrics
- Experience in performance, security, and risk-based testing for connected systems
- Knowledge of test virtualization and mocking tools for sensor or AI inputs
- Excellent communication and collaboration skills, particularly in cross-disciplinary teams involving hardware, software, and data
- Fluency in English and Arabic



Benefits

Class A Medical Insurance