Meet the team

Building regulator-grade AI accountability

We design deterministic, replayable evaluations and evidence artifacts for HR AI and other high-risk AI systems — so decisions hold up under audits, works-council scrutiny, litigation, and procurement review.

Builders, not slideware. You work directly with the people who design the scenarios, run the tests, and ship the evidence.

Mano Venkatesan

Founder & CEO

Leads Aram Algorithm's executive strategy and delivery of regulator‑grade AI accountability, ensuring organizations can defend high‑risk AI decisions under regulatory, legal, and stakeholder scrutiny.

Expertise

  • AI accountability strategy & executive governance
  • HR AI and employment systems oversight (EU AI Act Annex III‑4)
  • Deterministic evaluation and evidence‑based assurance
  • Enterprise technology and systems leadership

Executive leader with 22+ years of experience in cloud platforms, large‑scale systems integration, and enterprise transformation. Mano founded Aram Algorithm to address a critical gap between AI deployment and defensible accountability, focusing on decision replayability, audit‑ready evidence, and regulator‑literate governance. He completed BlueDot Impact's AI Safety Fundamentals (Alignment) course in February 2025.

Talk to me about: Executive accountability strategy, AI governance readiness, regulator and works‑council scrutiny, pilot programs for HR AI systems

LinkedIn →

Yaswanth Kumar A

AI Research Associate

Contributes to regulator‑grade AI accountability work through research, evaluation support, and evidence preparation.

Expertise

  • AI/ML fundamentals
  • Evaluation support & test execution
  • Structured evidence preparation

Early‑career AI practitioner with hands‑on experience supporting model evaluation, documentation, and reproducibility‑focused workflows.

Talk to me about: Evaluation runs, test data preparation, experiment logging

LinkedIn →

Priyanka H A

AI Red‑Team Engineer

Executes scenario‑based AI red‑teaming to surface failure modes, robustness gaps, and accountability risks in high‑risk AI systems.

Expertise

  • AI red‑teaming & adversarial testing
  • LLM behavior evaluation
  • Safety, robustness, and compliance‑oriented testing

AI practitioner focused on red‑team methodologies, model behavior analysis, and structured testing workflows aligned with safety and regulatory expectations.

Talk to me about: Adversarial scenarios, failure‑mode discovery, evaluation design

LinkedIn →

Uma Lakshmi

Sales & Marketing Engineer

Bridges AI evaluation and red‑teaming outputs with client communication, ensuring accountability evidence is clearly understood and correctly positioned for stakeholders.

Expertise

  • Technical sales & solution positioning
  • AI red‑teaming and compliance communication
  • Translating evaluation evidence for business, legal, and HR audiences

Sales & Marketing Engineer working closely with AI safety and red‑teaming teams to support adoption, explain findings, and align technical accountability artifacts with client needs.

Talk to me about: Communicating evaluation results, stakeholder alignment, go‑to‑market for AI accountability

LinkedIn →

Jebrin G

AI Red‑Team Engineer (Accountability Records)

Creates regulator‑grade AI Accountability Records by translating red‑team evaluation results into structured, replayable evidence artifacts across engineering and go‑to‑market workflows.

Expertise

  • AI accountability record creation (logs, traces, evidence packs)
  • Red‑team evaluation support & scenario execution
  • Engineering ↔ sales/marketing liaison for accountability evidence

Works at the intersection of AI engineering, red‑team testing, and stakeholder communication, ensuring evaluation outputs are captured, structured, and represented without distortion.

Talk to me about: Accountability records, evidence translation, red‑team findings handoff

LinkedIn →

Yemmie Trespeces

Sales & Marketing Specialist

Supports client and stakeholder engagement by ensuring AI accountability outputs and red‑team findings are communicated clearly and accurately, grounded in real HR decision‑making contexts.

Expertise

  • HR operations & people‑process understanding
  • Client engagement & coordination
  • Accountability evidence presentation support
  • Translating technical evaluation artifacts into HR and business context

HR‑trained professional with experience in people operations and stakeholder coordination, now supporting AI accountability and red‑teaming teams by anchoring technical findings in practical HR workflows and employment realities.

Talk to me about: HR process alignment, evidence presentation for HR leaders, stakeholder coordination across people teams

LinkedIn →
Talk to us