Case Study

Therabits: AI-Generated Simulation and the Development of Psychology Training at Scale

Digitised Training Workflow

Increased practice session volume through controlled, repeatable, risk-free simulations

GDPR Compliance

Full control over data retention, access policies, and system availability

Client's Pain Point

In traditional psychology education, students acquire hands-on clinical experience by sitting in on supervised therapy sessions with real patients, first observing and eventually participating under supervision. However, this model cannot scale to the volume of practice required to develop genuine clinical competence. Its limitations appear at several levels:

  • the number of sessions a student can take is constrained by scheduling and supervision availability;

  • patient consent imposes legitimate limits on student involvement; and

  • the risk of harm from an inexperienced practitioner, even under supervision, is non-trivial.

More critically, it offers minimal opportunity for students to practise responding to a patient themselves, as doing so prematurely, before a student has developed sufficient clinical judgement, poses a genuine risk of psychological harm to the patient. Even poorly chosen words from an inexperienced student can cause damage in a therapeutic context.

The university sought to resolve this constraint by significantly increasing the volume and variety of practice scenarios available to students while removing any risk to real patients. Therabits addresses this by replacing the live patient with an AI-generated avatar that presents realistic mental health scenarios in pre-rendered video form. Students engage with the simulation as they would in a real session, record their responses, and receive structured feedback from their supervisor.

Therabits serves three distinct user groups.

  1. Authors craft the prompts used to generate the AI avatars.

  2. Supervisors create the training content, configure learning paths, and evaluate student performance.

  3. Students, the psychology trainees, work through the exercises as part of their training curriculum.

Worth noting:

The application is not a consumer product and is not designed for public or commercial use. It is an institutional experimental tool built to support a specific, accredited training programme.

Besides its training function, the project also carries a research dimension: the university is conducting a formal study to evaluate whether AI-generated simulation constitutes a pedagogically effective and clinically sound method of training psychology students. This web-based application is both the instrument of that study and, to some extent, its subject.

How does the solution work?

The solution comprises three sequential workflows: prompt creation by authors, content compilation by supervisors, and training exercises completed by students.

Author/Supervisor Workflow: Prompt and Content Creation


An author logs into the application and navigates to the Prompts section, where they author a scenario script. The script is written in the first person, as if spoken by a patient describing their mental health situation. Scenarios vary widely: a bereavement, a relationship breakdown, a loss of purpose, or more severe presentations involving self-harm or acute crisis. A supervisor then takes over, selecting the avatar’s visual appearance and voice from the available configurations. The complete submission is transmitted to the external AI video generation service (Synthesia), which renders a photorealistic video of a digital human avatar delivering the script.

The video generation process is asynchronous. When rendering is complete, Synthesia notifies the application via a webhook, and the application automatically downloads the generated video. The supervisor can then play back the video within the application to assess its clinical realism and suitability. If it meets the required standard, it is approved and becomes available for use in training. If it is rejected, it can be revised and resubmitted. The application interface surfaces the full status of every video asset: in progress, completed, failed, restarted, or rejected by the provider, giving supervisors complete visibility over the content pipeline.
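To make the flow concrete, here is a minimal sketch of how such a webhook receipt could be handled in Laravel, the framework implied by the Artisan commands described later. The controller, model, job, and payload field names are assumptions for illustration, not the project’s actual code.

```php
<?php

namespace App\Http\Controllers;

use App\Jobs\DownloadGeneratedVideo;
use App\Models\VideoAsset;
use Illuminate\Http\Request;
use Illuminate\Http\Response;

class SynthesiaWebhookController extends Controller
{
    public function __invoke(Request $request): Response
    {
        // Payload field names are assumptions about the provider's callback shape.
        $providerJobId = $request->input('video_id');
        $status        = $request->input('status'); // e.g. "complete", "failed"

        // Match the callback to the locally tracked asset and record its state.
        $asset = VideoAsset::where('provider_job_id', $providerJobId)->firstOrFail();
        $asset->update(['status' => $status]);

        // Fetch the finished file asynchronously so the webhook returns quickly.
        if ($status === 'complete') {
            DownloadGeneratedVideo::dispatch($asset);
        }

        return response()->noContent();
    }
}
```

The queued download job itself is sketched later, under the webhook retrieval feature.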


Two video types are generated for each scenario. 

  1. The first is the scenario video itself: the avatar’s spoken presentation of their mental health situation.

  2. The second is a placeholder active-listening video (idle video): a short loop of the avatar nodding attentively, which plays continuously while the student records their response. This placeholder preserves the psychological realism of the simulated session, ensuring the student is not responding to a static screen.

Once a sufficient set of approved videos exists, the supervisor assembles a learning path: a curated, ordered sequence of scenario prompts grouped by clinical complexity (easy to hard). Learning paths range from entry-level presentations to advanced scenarios involving severe mental health conditions.

Student Workflow: Training Exercise

When an exercise is assigned, the student selects a learning path and begins. The application renders a simulated video call interface. The scenario video plays, presenting the AI avatar’s clinical situation. When the video concludes, the active-listening placeholder (idle video) begins to loop, and the student’s verbal response is recorded. The recording is stored in the application for later review. After each prompt, the student advances through the learning path in sequence.

Upon completing all prompts, the student rates the overall challenge of the exercise (easy, medium, hard) and completes a self-assessment: a reflection, supported by time-stamped annotations, on their performance and clinical decision-making during the exercise. The completed learning path, including all recordings and the self-assessment, is then made available to the supervising professor for evaluation. Upon satisfactory completion, the student can download a PDF certificate of completion.

There is no dynamic or generative AI component during the student session. All content is pre-rendered and fixed: there is no real-time LLM processing, no dynamic avatar response, and no AI evaluation of the student’s performance during the exercise.

How was the solution developed?

Development Approach

At Laramate GmbH, we value outcomes.

Our clients pay for delivered services. Development was milestone-based, consistent with our standard project methodology.

— Laramate GmbH

A notable feature of this engagement was the client’s requirement for continuous code access. At the completion of each milestone, the updated codebase was pushed to the university’s own repository. This arrangement allowed the university’s in-house IT staff to review the code progressively, ask questions, and verify that each phase of development met the specified requirements before work proceeded to the next milestone. Pushing only at milestone boundaries also preserved the integrity of the development branch and avoided merge conflicts, a pattern worth establishing formally at the outset of similar engagements.

We are committed to client independence. Our policy against vendor lock-in is firm: no proprietary control over a client’s codebase. This is one of our core operating principles.

— Laramate GmbH

In an institutional context such as this, where the university must be able to maintain, adjust, and potentially extend the application independently over the duration of a multi-year study, that transparency is not merely a commercial stance but a functional requirement that benefits both parties. 

We further supported the university by advising them on the secure configuration of their self-hosted environment, a necessary step given the institution’s stringent data protection and compliance obligations.

Development Challenges

Sensitive Content Restrictions on Commercial Video Generation APIs

The first challenge concerned the content requirements of the training material. Training psychology students requires scenarios involving patients in crisis, including individuals presenting with self-harm, suicidal thoughts, or severe trauma; these are legitimate clinical needs. However, generating video content on these topics is prohibited under the terms of service of virtually all commercial AI video generation platforms. Providers routinely refuse such requests and may suspend or permanently block accounts that submit them.

This created a direct conflict between the clinical scope of the application and the operational constraints of available API providers. The application cannot fulfil its purpose without the ability to generate content on these sensitive topics, yet no single provider can be relied upon to permit it indefinitely.

To address this conflict, rather than accepting a dependency on any single provider’s policies, the application is built around a universal, documented provider interface: the API integration layer abstracts the connection to the video generation service so that substituting a new provider, whether one operating under a different content policy or one with a commercial arrangement that permits clinical content, requires only implementing the defined interface rather than reconstructing the integration from the ground up.
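As an illustration, the interface might look like the following sketch. The name VideoGenerationProvider and its method signatures are hypothetical, chosen here to mirror the lifecycle described in this case study rather than taken from the actual codebase.

```php
<?php

namespace App\Services;

// Hypothetical provider abstraction; names and signatures are illustrative.
interface VideoGenerationProvider
{
    /**
     * Submit a scenario script plus avatar/voice configuration for rendering.
     * Returns the provider-side job identifier used to track the render.
     */
    public function submitGeneration(string $scriptText, array $avatarConfig): string;

    /** Query the provider for the current status of a render job. */
    public function getStatus(string $providerJobId): string;

    /** Resolve a download URL for a completed video. */
    public function getDownloadUrl(string $providerJobId): string;

    /** Register the application's webhook endpoint with the provider. */
    public function registerWebhook(string $callbackUrl): void;
}
```

Under this arrangement, moving away from Synthesia would mean writing one new class that implements the interface and rebinding it in the service container; the rest of the application is unaffected.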

PDF Certificate Generation Without a Headless Browser

The second challenge arose from the client’s server maintenance constraints. Our standard approach to PDF generation deploys Chrome in headless mode, which provides comprehensive CSS support and enables precise control over layout, typography, breakpoints, and page composition, consistent with modern document design standards. However, running headless Chrome on a web server requires installation and ongoing maintenance that the client’s constraints ruled out.

This constraint required the use of an alternative PHP-based PDF generation library whose CSS support is significantly more restricted, ruling out several styling techniques that would otherwise have been available. Producing a certificate template that meets the required visual standard within those constraints therefore demanded additional development effort. Nonetheless, the outcome was delivered at the necessary quality, with a deliberate trade-off: reduced long-term maintenance burden on the client in exchange for increased build-time complexity on the development side.
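The case study does not name the library used; as a representative example, a PHP-based generator such as Dompdf can render a certificate from HTML kept within a restricted CSS subset. The markup below is purely illustrative.

```php
<?php

require 'vendor/autoload.php'; // composer require dompdf/dompdf

use Dompdf\Dompdf;

// Certificate markup must stay within the library's restricted CSS subset:
// simple block layout and inline styles rather than flexbox or grid.
$html = '
  <div style="border: 4px solid #1a3c6e; padding: 40px; text-align: center;">
    <h1>Certificate of Completion</h1>
    <p>Awarded to <strong>Jane Doe</strong> for completing the learning path.</p>
  </div>';

$pdf = new Dompdf();
$pdf->loadHtml($html);
$pdf->setPaper('A4', 'landscape');
$pdf->render();

// Persist the finished certificate for download.
file_put_contents('certificate.pdf', $pdf->output());
```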

Maintaining GDPR Compliance Within an API-Dependent Architecture


The application records students’ video responses during training sessions. Because these recordings identify individuals and capture sensitive performance data in an academic context, GDPR compliance was a non-negotiable requirement. Although the application depends on an external API, that dependency is confined to content production: no recorded student data is transmitted to or stored by any external service, including the video generation API. Only supervisor-authored scripts and configuration data are transmitted externally; all student recordings are stored exclusively on the university’s own servers.
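That data-minimisation boundary can be made concrete in code. The sketch below is hypothetical, with illustrative field names; it shows the shape of an outbound request in which only the authored script and avatar configuration ever leave the university’s servers.

```php
<?php

/**
 * Build the only payload ever transmitted to the external provider.
 * Field names are illustrative, not the actual integration contract.
 */
function buildOutboundPayload(string $scriptText, string $avatarId, string $voiceId): array
{
    return [
        'script' => $scriptText, // supervisor-authored, first-person scenario
        'avatar' => $avatarId,   // visual appearance chosen by the supervisor
        'voice'  => $voiceId,    // voice configuration
        // Deliberately absent: student identities, recordings, self-assessments.
        // Those are written only to local storage and never sent externally.
    ];
}
```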

Key Features & Functionality

Prompt Management Module

Authors create scenario scripts through a dashboard. Each prompt is written as first-person dialogue to be delivered by the AI-generated avatar, describing a specific mental health situation. A supervisor then takes over and selects the avatar’s visual appearance and voice from the options offered by the video generation API.

Video Generation Pipeline with Status Tracking

Upon submission, the application tracks each video asset through its full production lifecycle. Supervisors can see in real time, via webhook updates, whether a video is in progress, has completed successfully, has failed, has been restarted, or has been rejected by the provider, including rejections triggered by content policy violations.
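Such a lifecycle maps naturally onto a small state type. The following is a hypothetical sketch using a PHP 8.1 backed enum; the names are illustrative, not the project’s actual model.

```php
<?php

// Hypothetical model of the asset lifecycle surfaced in the interface.
enum VideoStatus: string
{
    case InProgress = 'in_progress';
    case Completed  = 'completed';
    case Failed     = 'failed';
    case Restarted  = 'restarted';
    case Rejected   = 'rejected'; // includes provider content-policy rejections

    /** Whether the asset needs further action from a supervisor. */
    public function needsAttention(): bool
    {
        return match ($this) {
            self::Failed, self::Rejected => true,
            default => false,
        };
    }
}
```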

Webhook-Driven Automated Video Retrieval

The application integrates with the video generation provider, Synthesia, via a webhook endpoint. When rendering is complete, the provider sends a notification to the application, which automatically downloads the completed video and stores it locally.
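The download step pairs with the webhook controller sketched earlier. A plausible shape, assuming a queued Laravel job with hypothetical class and field names:

```php
<?php

namespace App\Jobs;

use App\Models\VideoAsset;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Storage;

class DownloadGeneratedVideo implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(private VideoAsset $asset)
    {
    }

    public function handle(): void
    {
        // Fetch the finished render from the provider...
        $response = Http::get($this->asset->download_url)->throw();

        // ...and persist it on the university's own storage, never externally.
        Storage::disk('local')->put("videos/{$this->asset->id}.mp4", $response->body());

        $this->asset->update(['status' => 'completed']);
    }
}
```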

CLI Command Suite for API Management

A dedicated set of Artisan CLI commands was developed to manage the webhook integration with Synthesia, enabling administrators to create, update, and remove webhook endpoint configurations without accessing the provider’s dashboard directly. These commands are used during initial deployment and when integration parameters need to be updated.
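A sketch of what one such command could look like, assuming the provider interface from the earlier sketch is bound in Laravel’s service container; the command name and signature are illustrative, not the project’s actual suite.

```php
<?php

namespace App\Console\Commands;

use App\Services\VideoGenerationProvider;
use Illuminate\Console\Command;

class RegisterProviderWebhook extends Command
{
    // Illustrative usage: `php artisan synthesia:webhook-register https://...`
    protected $signature = 'synthesia:webhook-register {url : Callback URL the provider should notify}';

    protected $description = 'Register the application webhook endpoint with the video provider';

    public function handle(VideoGenerationProvider $provider): int
    {
        $provider->registerWebhook($this->argument('url'));

        $this->info('Webhook registered: ' . $this->argument('url'));

        return self::SUCCESS;
    }
}
```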

Supervisor Audit Interface

Supervisors review generated videos within the application, configure avatar appearance and voice, and submit content to the video generation API. Each generated video can then be approved or rejected, giving supervisors close control over the content library.

Content library management centres on a learning path builder, which allows supervisors to assemble approved scenario videos into ordered, levelled learning paths, configuring difficulty progression from easy to advanced clinical scenarios to calibrate the training experience for different stages of the curriculum.
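One plausible way to persist such ordered, levelled paths is a pivot-style table. The migration below is a hypothetical sketch; table and column names are illustrative.

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::create('learning_path_items', function (Blueprint $table) {
            $table->id();
            $table->foreignId('learning_path_id')->constrained();
            $table->foreignId('video_asset_id')->constrained(); // approved scenario video
            $table->unsignedInteger('position');                // order within the path
            $table->string('difficulty');                       // 'easy' ... 'hard'
            $table->timestamps();

            // Each position in a path is occupied exactly once.
            $table->unique(['learning_path_id', 'position']);
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('learning_path_items');
    }
};
```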

Simulated Video Call: Student Exercise Interface

When assigned an exercise, the student is directed to a dashboard with a simulated video call environment in which the training session takes place. This interface manages the sequential playback of the pre-rendered patient scenarios and idle placeholder videos, records the student’s time-bound responses, and routes the student through the full learning path according to its difficulty levels.
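The routing behind that sequence reduces to a single ordered query. A minimal sketch, reusing the hypothetical learning_path_items structure from the previous sketch:

```php
<?php

use App\Models\LearningPathItem; // hypothetical Eloquent model

/**
 * Return the next scenario after the student's current position in the
 * path, or null once every prompt has been completed.
 */
function nextScenario(int $learningPathId, int $currentPosition): ?LearningPathItem
{
    return LearningPathItem::query()
        ->where('learning_path_id', $learningPathId)
        ->where('position', '>', $currentPosition)
        ->orderBy('position')
        ->first();
}
```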

Self-assessment Questionnaire

Following the completion of a learning path, the student completes a structured self-assessment questionnaire, beginning with a challenge rating. The questionnaire captures a reflective evaluation of the student’s own performance: their clinical reasoning, communication approach, and comfort with the scenario. These responses form part of the data made available to the supervising professor.

Supervisor Evaluation Dashboard

Provides the supervising professor with access to student recordings, self-assessments, and completion status across all learning paths, including all annotations made by the student, enabling evidence-based feedback.

PDF Certificate Generation

When a student completes a learning path, a downloadable PDF certificate is generated programmatically within the application using a server-side PDF generation library.

API Integration Layer

A documented, universal provider interface abstracts the connection to the video generation service. Currently integrated with Synthesia, the interface is designed for extensibility: an alternative provider can be substituted with minimal implementation effort, whether necessitated by content policy restrictions, service discontinuation, or the availability of a provider better suited to clinical content.

Your concern, our priority

Do you have a project? Let's talk about it

  • We value outcomes and work milestone-based
  • A successful project is a two-way street
  • Full client independence - no vendor lock-in
  • Free initial consultation within 24 hours

Your contact person

Chris Wolf
CEO and Managing Director, Laramate GmbH
Senior PHP Developer