Senior Software Code Reviewer
About Vetto
Vetto is a tech company focused on building and scaling high-quality datasets for artificial intelligence systems. We work at the intersection of human expertise and AI, ensuring that models are trained on technically accurate, well-defined, and realistic data.
Our projects support the training and evaluation of Large Language Models (LLMs), where technical rigor and correctness are non-negotiable.
About the Project
This project focuses on the technical review and validation of coding tasks used to train AI models.
Code is generated automatically in response to software engineering prompts, and your role is to evaluate whether that code is correct and truly solves what was asked.
The core questions you will be answering on every task:
- Is the coding task technically well-defined?
- Does the generated code actually solve the problem?
- Are the associated tests robust, correct, and aligned with real-world software behavior?
Tests are treated as the mechanism of truth in this context. Mistakes here propagate at scale into AI systems, so error criticality is high.
Languages
Tasks in this project involve code written across multiple languages. You will be expected to review and evaluate tasks in any of the following: Python, JavaScript / TypeScript, Go, Rust, and Java.
Strong command of at least two of these languages is required. Breadth across languages is a plus.
Responsibilities
- Review and analyze generated code against the original software engineering prompt
- Evaluate whether the coding task itself is clearly and correctly defined
- Validate whether tests accurately reflect whether the problem has been solved
- Identify gaps, ambiguities, false positives, and false negatives in test suites
- Determine whether a solution that passes the tests genuinely solves the underlying problem
- Apply strict technical criteria and quality standards consistently across tasks
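To make the "false positive" idea above concrete, here is a minimal, hypothetical sketch (not taken from any project task): a buggy function paired with a weak test that passes anyway, and a stronger test that would catch the bug.

```python
# Hypothetical illustration of a test-suite "false positive":
# a test that passes even though the implementation is wrong.

def median(values):
    """Buggy median: forgets to sort the input first."""
    n = len(values)
    mid = n // 2
    if n % 2 == 1:
        return values[mid]
    return (values[mid - 1] + values[mid]) / 2

# Weak test: the input happens to be pre-sorted, so the bug is invisible.
assert median([1, 2, 3]) == 2   # passes, but only by accident

# Robust test: an unsorted input exposes the missing sort.
# assert median([3, 1, 2]) == 2  # would fail against the buggy code
```

Spotting that the weak test validates an accident of the input rather than the intended behavior is exactly the kind of judgment this role requires.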
Required Profile
This role is designed for mid/senior-level software engineers with real professional experience.
Technical requirements
- Proven professional experience in software development (production environments)
- Strong command of at least two of the listed languages
- Experience reviewing and evaluating code written by other engineers
- Solid understanding of automated testing — how tests validate (or fail to validate) behavior
- Experience contributing to or working with open source projects
- High attention to detail and strong technical judgment
- Comfortable working fully in English (reading and writing)
Nice to have
- Experience with test-driven development (TDD) or test design
- Familiarity with large or complex codebases
- Background in AI, ML, or data-centric projects
Project Details
This is expert, task-based technical work focused on analysis, validation, and judgment — not code production. Each task takes approximately 30 minutes. Tasks are reviewed under continuous QA and calibration.
Compensation is approximately $100 per hour (task-equivalent reference), varying with task complexity and approved volume.
Selection Process
The selection process is fully asynchronous and based on your application. There are no traditional interviews: we evaluate candidates through their background, screening responses, and a short technical exercise focused on code review and test validation.
Final Note
This role is not about writing more code. It is about technical judgment, rigor, and responsibility.
If you are comfortable challenging problem definitions, questioning tests that "pass but are wrong", and acting as a technical quality gate, this project is for you!