About arc42
arc42, the template for documentation of software and system architecture.
Template Version 8.2 EN. (based upon AsciiDoc version), January 2023
Created, maintained and © by Dr. Peter Hruschka, Dr. Gernot Starke and contributors. See https://arc42.org.
1. Introduction and Goals
1.1. Context and Motivation
WIChat is a web-based question-and-answer application where users answer a series of questions from various categories within a set time limit. The platform automatically generates questions based on Wikidata and allows users to receive hints through an external language model (LLM). This functionality adds a conversational component to the game, enhancing the user experience.
RTVE has contracted ChattySw to update an experimental version of the online contest previously developed by HappySw, incorporating new interactive hint functionalities and improving the gameplay experience.
1.2. Key Requirements
The system must meet the following requirements:
- A web application accessible from any browser.
- User registration and authentication.
- Automatic question generation based on Wikidata.
- The ability to obtain hints generated by a language model (LLM) via an API.
- Validation and mitigation of incorrect responses from the language model.
- Time control for answering each question.
- A documented API for accessing questions and user data.
- Automatic generation of correct and incorrect answers (distractors).
1.3. Quality Objectives
The following quality objectives will guide architectural decisions:
| Objective | Priority | Description |
|---|---|---|
| Scalability | High | The system must support a growing number of users without affecting performance. |
| Availability | High | The application must be available at all times with minimal downtime. |
| Security | High | Protection of user data and validation of responses generated by the LLM. |
| Usability | Medium | Intuitive interface and smooth user experience. |
| Maintainability | Medium | Modular and well-documented code to facilitate future improvements. |
1.4. Stakeholders
The following stakeholders are involved in the development and use of the system:
| Role | Contact | Expectations |
|---|---|---|
| Client (RTVE) | | Ensure that the application meets contract requirements. |
| Development Team (ChattySw) | | Implement the system according to quality objectives. |
| Users | Registered in the application | Access an interactive and seamless gaming experience. |
2. Constraints
2.1. Technical Constraints
The following technical constraints affect the development of WIChat:
- Web Technology: The application must be developed using modern web technologies (React.js for the frontend and Node.js/Python for the backend).
- Database: PostgreSQL or MongoDB will be used for data storage.
- Language Model: A language model (LLM) will be integrated through an external API.
- Question Source: Questions must be automatically generated based on Wikidata data.
- Response Time: Answers to questions must be recorded within a set time limit.
2.2. Organizational Constraints
The following organizational constraints affect the project:
- Delivery Deadlines: The application must be operational before the project closure as agreed.
- Documentation: The architecture and development must be documented following the arc42 standard.
- Open Source: The source code must be hosted in an accessible repository for review and tracking.
2.3. Security and Privacy Constraints
To ensure user security and privacy, the following constraints are established:
- Data Management: User data must be protected in compliance with data protection regulations.
- Response Validation: Potential errors or "hallucinations" from the language model must be mitigated to prevent incorrect information in hints.
These constraints will define the boundaries within which WIChat will be designed and developed.
3. Context and Scope
3.1. Business Context

3.2. Technical Context

4. Solution Strategy
4.1. Organizational Decisions
- Adopting a Kanban-style project management approach: The team employs Kanban to organize and visualize the software development process. Tasks are represented on a Kanban board, facilitating prioritization and fostering team-wide transparency. This methodology enhances focus by limiting work in progress (WIP), promoting continuous delivery, and helping the team identify process bottlenecks early.
- Using GitHub Flow as the development workflow: GitHub Flow is adopted to ensure a clear and collaborative development process. It encourages short-lived feature branches and enforces Pull Request-based code reviews, so that every change is peer-reviewed before integration, increasing stability and maintaining consistent code quality.
- Continuous Integration with GitHub Actions and SonarQube: A CI pipeline implemented with GitHub Actions automatically runs tests and code quality checks on every push. SonarQube analyzes the code for bugs, vulnerabilities, code smells, and duplications. These tools provide immediate feedback, ensuring issues are addressed before deployment and reinforcing best practices.
4.2. Technical Decisions
- Starting the project from scratch with custom configuration: While a reference project exists, the development team opted to build the system from the ground up. This fosters a deeper understanding of the architecture, eliminates dependencies on legacy code, and enables customization aligned with current needs. The prior implementation serves strictly as a conceptual guide.
- Using MongoDB as the database: MongoDB, a document-oriented NoSQL database, was selected for its flexibility, scalability, and alignment with the application’s data model. The team’s familiarity with it and prior successful usage further reinforce this choice.
- Using Gemini as the LLM for response generation: After testing several large language models, Gemini was selected for its superior performance in generating relevant, context-aware responses to user prompts. Its integration enables intelligent, adaptive behavior in the application’s core logic.
- Using JavaScript as the primary programming language: JavaScript is chosen for its ubiquity in web development, extensive ecosystem, and strong support for full-stack development via Node.js. The team’s familiarity with the language ensures faster development cycles and reduces onboarding time.
- Using Node.js as the backend environment: Node.js is used to build the server-side logic due to its asynchronous, event-driven architecture and seamless integration with JavaScript. It is well suited to handling concurrent requests in real-time applications such as quizzes.
- Using Material UI as the UI framework: Material UI accelerates front-end development by offering a comprehensive set of customizable, accessible components. It ensures visual consistency while reducing the amount of custom styling and layout logic needed.
- Using Docker for service deployment: Docker is employed to containerize application components, enabling environment consistency, isolated testing, easier scaling, and streamlined deployment pipelines.
- Using Azure as the cloud platform: Microsoft Azure is selected for its robust cloud services, seamless DevOps integration, enterprise-level security, and global infrastructure. Azure’s CI/CD support and monitoring tools align well with the project’s operational requirements.
- Using a Linux-based server environment: Linux is chosen for hosting the application due to its reliability, performance efficiency, and compatibility with modern development and automation tools.
- Separation of game logic and UI components: To ensure maintainability and testability, core game functionality (e.g., score tracking, validation logic, question selection) is decoupled from presentation components. This separation enforces single responsibility and supports future extensibility.
- Using `setInterval` to manage game timers: Timed gameplay requires precise countdown control. Using `setInterval` with timestamp drift correction enables consistent and reliable timer behavior, even under varying load conditions.
- Anonymous play support (no login required): Users can access and play quizzes without authentication. However, to preserve data integrity, only authenticated users have access to statistics, history, and profile management. This improves accessibility without compromising personalization for logged-in users.
- User profile editing and password updates: Authenticated users can manage personal details and change their passwords through secure interfaces. Proper validation and feedback mechanisms are integrated to ensure usability and security.
- Using bcrypt to protect user passwords: All stored passwords are hashed using bcrypt, which provides built-in salting and adaptive hashing to resist brute-force and rainbow-table attacks. This aligns with industry best practices for secure authentication.
- Game modes based on thematic categories: To enhance user engagement, the application supports multiple quiz modes, including Countries, Music, Cinema, and Mixed Mode. This allows users to tailor their experience to personal interests.
- Daily question generation via cron + Wikidata: An automated cron job fetches data from Wikidata and generates new quiz questions daily. This keeps the application dynamic and up to date without manual content entry.
- Using Swagger for API documentation: Swagger (OpenAPI) is integrated to document all API endpoints interactively. Developers can view, test, and understand the API via a visual interface, which streamlines both internal development and third-party integration.
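The drift-corrected timer decision can be sketched as follows. Function and parameter names are illustrative, not taken from the actual codebase; the point is that each tick recomputes the remaining time from a fixed deadline, so late `setInterval` callbacks never accumulate error.

```javascript
// Pure helper: remaining time until a fixed deadline (easy to unit-test).
function remainingMs(deadlineMs, nowMs) {
  return Math.max(0, deadlineMs - nowMs);
}

// Drift-corrected countdown: each tick re-reads the clock instead of
// subtracting a fixed step, so setInterval jitter cannot accumulate.
function startCountdown(durationMs, onTick, onExpire, tickMs = 250) {
  const deadline = Date.now() + durationMs;
  const timer = setInterval(() => {
    const remaining = remainingMs(deadline, Date.now());
    onTick(remaining);
    if (remaining === 0) {
      clearInterval(timer);
      onExpire();
    }
  }, tickMs);
  return () => clearInterval(timer); // cancel handle for early exit
}
```

Because the deadline is captured once, a tick that fires 30 ms late simply reports 30 ms less remaining time instead of silently extending the round.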
4.3. Quality and Maintainability Decisions
- Using GitHub Flow and Pull Requests for code review: Pull Requests are enforced across all feature and bug-fix branches. Each submission is subject to peer review, ensuring that standards are met, potential defects are caught early, and collaboration remains transparent.
- Using Jest for unit testing: Jest is used to test application logic at the unit level. It offers fast test execution, easy mocking, and snapshot testing, making it well suited to JavaScript-based projects.
- Using jest-cucumber for BDD-style testing: Test scenarios are written in Gherkin syntax using `jest-cucumber`, improving test readability and enabling non-developers to participate in test definition. This supports behavior-driven development (BDD) and improves communication between technical and business stakeholders.
- Using pepeter for React component testing: Pepeter is used to create expressive, readable tests for React components. It simplifies interaction simulation and state validation, allowing for deeper and more maintainable component-level testing.
- Using Cypress for end-to-end testing: Cypress is adopted for testing complete user flows, such as answering questions, submitting results, and navigating through different parts of the application. This ensures high confidence in the stability of the application post-deployment.
- Using Ruby for documentation deployment: Ruby is selected as the documentation deployment environment based on tooling compatibility. Existing scripts and setup guides are designed for Ruby, minimizing setup overhead and ensuring smooth integration with the documentation pipeline.
- Modular and testable code structure: The codebase follows modular principles, where each component or module fulfills a single, well-defined responsibility. This promotes testability, simplifies debugging, and improves long-term maintainability.
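As an illustration of the modular-and-testable principle these tools rely on, core logic such as answer scoring can be written as a pure function and asserted directly from Jest. The scoring rule below is a hypothetical example, not the actual WIChat formula:

```javascript
// Hypothetical scoring rule: a correct answer earns a base score plus a
// bonus proportional to how much time was left on the question timer.
function scoreAnswer(correctAnswer, chosenAnswer, remainingMs, questionTimeMs) {
  if (chosenAnswer !== correctAnswer) return 0;
  const base = 100;
  const timeBonus = Math.round(100 * (remainingMs / questionTimeMs));
  return base + timeBonus;
}

// A Jest unit test for it stays equally small:
//   test('fast correct answer earns a time bonus', () => {
//     expect(scoreAnswer('Paris', 'Paris', 5000, 10000)).toBe(150);
//   });
//   test('wrong answer scores zero', () => {
//     expect(scoreAnswer('Paris', 'Rome', 5000, 10000)).toBe(0);
//   });
```

Keeping such rules free of DOM or HTTP dependencies is what makes the Jest, jest-cucumber, and Cypress layers cheap to maintain.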
5. Building Block View
This view shows the static decomposition of the WIChat system into its main building blocks (components, services) and outlines their responsibilities and primary relationships.
5.1. High-Level System Overview (Context Recap)
The following diagram shows the WIChat system as a whole (blackbox), interacting with the primary user and external data/service providers (Wikidata, LLM).

5.2. Whitebox Overall System (Level 1)
This section describes the top-level decomposition of the WIChat trivia game system into its internal microservices and components.
5.2.1. Overview Diagram (Level 1)
The following diagram shows the main internal building blocks (Level 1) of the WIChat system and their primary interactions.

5.2.2. Motivation
The WIChat system is decomposed into microservices (Level 1) to:
- Separation of Concerns: UI (`WebApp`), routing/aggregation (`Gateway Service`), user management (`Users Service`), authentication (`Auth Service`), question logic (`Question Service`), history/statistics (`History Service`), interaction/cache with Wikidata (`Wikidata Service`), AI suggestions (`LLM Service`).
- Independent Scalability: Scale each service based on demand (e.g., `Question Service` during gameplay, `Auth Service` during peak logins).
- Technological Flexibility: Allows different technologies per service and independent upgrades.
- Maintainability & Testability: Smaller services are easier to manage, test, and deploy.
- Resilience: Failures in non-critical services (e.g., `LLM Service`) have minimal impact on core gameplay.
5.2.3. Level 1 Building Blocks (Blackboxes)
WebApp (Frontend)
- Responsibility: Provides the interactive web interface. Displays questions, answers, images, score, statistics, and user profile. Communicates only with the `Gateway Service`.
- Interfaces (Consumed): Gateway API (REST/WebSocket).
Gateway Service
- Responsibility: Single entry point. Routes requests to the microservices (`Auth`, `Users`, `Questions`, `History`, `LLM`, `Wikidata`), handles CORS and basic validation, exposes Swagger and metrics (Prometheus), and checks downstream services’ health.
- Interfaces:
  - Provided: Gateway API (REST), `/metrics`, `/health`, `/api-doc`.
  - Consumed: Auth, Users, Question, History, LLM, Wikidata APIs.
- Technology: Node.js, Express, Axios.
Auth Service
- Responsibility: User authentication (credential validation and JWT issuance).
- Interfaces:
  - Provided: `/login` (REST).
  - Consumed: Users Service via Gateway, or direct database access.
- Technology: Node.js/Express.
Users Service
- Responsibility: CRUD operations for users, avatar management.
- Interfaces:
  - Provided: `/addUser`, `/user/{id}` (REST).
  - Consumed: Database.
- Technology: Node.js/Express.
Question Service
- Responsibility: Storage and retrieval of generated questions.
- Interfaces:
  - Provided: `/addQuestion`, `/questions` (REST).
  - Consumed: Database.
History Service
- Responsibility: Persists game history and calculates aggregated statistics.
- Interfaces:
  - Provided: `/addGame`, `/stats`, `/getBestGames`, `/getAllGames`.
  - Consumed: Database (Mongoose).
- Technology: Node.js, Express, Mongoose.
Wikidata Service
- Responsibility: Facade and cache for Wikidata. Queries SPARQL, processes, and caches data.
- Interfaces:
  - Provided: `/api/entries/{…}`.
  - Consumed: Wikidata SPARQL endpoint and database (Mongoose).
LLM Service (Hint Service)
- Responsibility: Orchestrates question and hint generation. Fetches base data from Wikidata, calls the external LLM, formats, and persists questions.
- Interfaces:
  - Provided: `/generateQuestions`, `/getHint`, `/getHintWithQuery`.
  - Consumed: Gateway → Wikidata Service, external LLM API, Gateway → Question Service.
- Technology: Node.js, Express, Axios, @google/genai.
Database
- Responsibility: Persistent storage for users, history, questions, and Wikidata cache.
- Interfaces: MongoDB driver, consumed by the services.
- Technology: MongoDB.
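The Gateway’s routing responsibility can be sketched as a prefix table mapping public paths to downstream services. The endpoint paths come from the service descriptions above; the hostnames and ports are illustrative assumptions, not the real deployment configuration:

```javascript
// Hypothetical routing table for the Gateway Service. Hosts and ports
// are assumed for illustration only.
const routes = [
  { prefix: '/login',             target: 'http://authservice:8002' },
  { prefix: '/addUser',           target: 'http://usersservice:8001' },
  { prefix: '/user/',             target: 'http://usersservice:8001' },
  { prefix: '/addQuestion',       target: 'http://questionservice:8004' },
  { prefix: '/questions',         target: 'http://questionservice:8004' },
  { prefix: '/addGame',           target: 'http://historyservice:8005' },
  { prefix: '/stats',             target: 'http://historyservice:8005' },
  { prefix: '/getHint',           target: 'http://llmservice:8003' },
  { prefix: '/generateQuestions', target: 'http://llmservice:8003' },
  { prefix: '/api/entries/',      target: 'http://wikidataservice:8006' },
];

// Resolve an incoming request path to the downstream URL, or null (→ 404).
function resolveTarget(path) {
  const route = routes.find(r => path.startsWith(r.prefix));
  return route ? route.target + path : null;
}
```

In the real service this lookup would sit behind Express middleware that forwards the request with Axios; the table form makes the single-entry-point role of the Gateway explicit.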
5.3. Important Interfaces (Summary)
Summary of key interfaces.
| Interface Name | Description | Provided By | Consumed By |
|---|---|---|---|
| Gateway API (REST) | | Gateway Service | WebApp, LLM Service, Users Service |
| Auth Service API | | Auth Service | Gateway Service |
| Users Service API | | Users Service | Gateway Service |
| Question Service API | | Question Service | Gateway Service |
| History Service API | | History Service | Gateway Service |
| Wikidata Service API | | Wikidata Service | Gateway Service |
| LLM Service API | | LLM Service | Gateway Service |
| Database Access | | Database | Auth Service, Users Service, Question Service, History Service, Wikidata Service, LLM Service |
| Wikidata SPARQL | | Wikidata (External) | Wikidata Service |
| External LLM API | | LLM Provider | LLM Service |
5.4. Level 2 (Refinements)
5.4.1. White Box LLM Service (Hint Service)
Motivation (LLM Service Focus)
This service encapsulates the complex logic of interacting with external providers (LLM, Wikidata Service) and coordinates multiple steps to generate questions and hints.
Internal Logic Flow / Responsibilities
Question Generation Orchestration (`/generateQuestions` endpoint)

- Receives category and number of questions from the Gateway Service.
- Requests base data (including `imageUrl`) from the Wikidata Service via the Gateway.
- For each entry:
  - Formats textual information (`formatEntryInfo`).
  - Constructs a detailed prompt for the external LLM.
  - Calls the LLM API (`sendQuestionToLLM`).
  - Parses and validates the JSON response (`parseJsonResponse`), retrying if needed.
  - Combines the generated text with the `imageUrl`.
  - Persists the question via the `/addQuestion` endpoint on the Gateway.
- Aggregates all generated questions and returns them to the Gateway Service.
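This orchestration can be condensed into a sketch. The helper names (`formatEntryInfo`, `sendQuestionToLLM`, `parseJsonResponse`) follow the text; their implementations here are injected stubs, and the retry limit is an assumption:

```javascript
// Sketch of the /generateQuestions loop. Collaborators are injected so
// the flow can be exercised without real HTTP calls.
async function generateQuestions(entries, deps, maxRetries = 2) {
  const { formatEntryInfo, sendQuestionToLLM, parseJsonResponse, persistQuestion } = deps;
  const questions = [];
  for (const entry of entries) {
    const prompt = `Create one quiz question from: ${formatEntryInfo(entry)}`;
    let parsed = null;
    for (let attempt = 0; attempt <= maxRetries && !parsed; attempt++) {
      const raw = await sendQuestionToLLM(prompt); // external LLM API
      parsed = parseJsonResponse(raw);             // null if invalid JSON
    }
    if (!parsed) continue;                         // give up on this entry
    const question = { ...parsed, imageUrl: entry.imageUrl };
    await persistQuestion(question);               // POST /addQuestion via Gateway
    questions.push(question);
  }
  return questions;
}
```

Injecting the collaborators keeps the retry-and-merge logic unit-testable in isolation from the Gateway and the LLM provider.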
Hint Generation (`/getHint` endpoint)

- Receives the question text and answer options from the Gateway Service.
- Builds a prompt requesting a hint without revealing the correct answer.
- Calls the LLM API and parses the response.
- Returns a single-sentence hint.
Conversational Hint Generation (`/getHintWithQuery` endpoint)
- Similar to `/getHint`, but incorporates a user-specific query.
- Filters to prevent direct answer disclosure.
- Builds and sends the prompt to the LLM, parses the response, and returns the conversational hint.
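Both hint endpoints reduce to two small pieces: building a prompt that withholds the answer, and filtering the reply so the answer is never disclosed verbatim. The prompt wording and the filter below are illustrative assumptions, not the production prompt:

```javascript
// Build an LLM prompt asking for a one-sentence hint that never names an
// answer option. The userQuery parameter covers the conversational variant.
function buildHintPrompt(questionText, options, userQuery = null) {
  const lines = [
    'You are a quiz assistant. Reply with exactly one short hint sentence.',
    'Never state, spell out, or translate any of the answer options.',
    `Question: ${questionText}`,
    `Options: ${options.join(', ')}`,
  ];
  if (userQuery) lines.push(`The player asks: ${userQuery}`);
  return lines.join('\n');
}

// Last line of defense: reject hints that leak the correct answer verbatim.
function hintDisclosesAnswer(hintText, correctAnswer) {
  return hintText.toLowerCase().includes(correctAnswer.toLowerCase());
}
```

A leaked hint detected by the filter can simply be regenerated, the same way invalid JSON triggers a retry during question generation.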
5.5. Level 3 (Refinements / Concepts)
5.5.1. Concept: Question Generation and Storage Flow
Involved components: Gateway Service, LLM Service, Wikidata Service, Question Service, Database, Wikidata SPARQL, External LLM.
- WebApp requests questions by category from the Gateway.
- Gateway routes to LLM Service (`/generateQuestions`).
- LLM Service fetches base data from Wikidata Service via Gateway.
- Wikidata Service returns cached data.
- LLM Service formats and sends a prompt to the external LLM.
- LLM responds in JSON; LLM Service parses and validates.
- LLM Service merges text and image, then calls `/addQuestion` via Gateway.
- Gateway routes to Question Service, which stores it in the database.
- LLM Service returns the questions to the original caller.
5.5.2. Concept: Statistics Calculation
Responsible component: History Service
When `/stats` is called, the service:
- Retrieves all user game records.
- Calculates aggregated statistics in memory (total points, number of games, win/loss ratio, averages, most played category).
- Returns the results, including the top 3 games.
⚠️ For users with very large histories, performance may degrade if all records are loaded into memory.
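A minimal sketch of that in-memory aggregation (the game-record shape and the win rule are assumptions for illustration):

```javascript
// Aggregate a user's game records in memory, as /stats does. A game counts
// as a "win" here when more than half the answers were correct (an assumed
// rule for the sketch).
function computeStats(games) {
  const byCategory = {};
  let totalPoints = 0;
  let wins = 0;
  for (const g of games) {
    totalPoints += g.points;
    if (g.correct > g.total / 2) wins++;
    byCategory[g.category] = (byCategory[g.category] || 0) + 1;
  }
  const gamesPlayed = games.length;
  return {
    totalPoints,
    gamesPlayed,
    wins,
    losses: gamesPlayed - wins,
    avgPoints: gamesPlayed ? totalPoints / gamesPlayed : 0,
    mostPlayedCategory: Object.keys(byCategory)
      .reduce((a, b) => (byCategory[a] >= byCategory[b] ? a : b), null),
    bestGames: [...games].sort((a, b) => b.points - a.points).slice(0, 3),
  };
}
```

Because every record is loaded and reduced in a single pass, memory grows linearly with history size, which is exactly the degradation the warning refers to; a MongoDB aggregation pipeline would be the natural fix.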
6. Runtime View
The Runtime View illustrates how the different components of the system interact at runtime to fulfill specific use cases. It focuses on the dynamic behavior of the system, describing the communication between the frontend, gateway, and backend microservices. Each diagram represents a specific user action or system event, highlighting the sequence of requests, validations, and data flows involved.
These diagrams help clarify how responsibilities are distributed across services and how data moves through the architecture during key operations such as user registration, login, password changes, and question generation.
6.1. User Registration

6.2. User Login

6.3. Question Generation

6.4. Answering a Question

6.5. Asking the AI Chat for a Hint

6.6. User Password Change

7. Deployment View
7.1. Infrastructure Level 1

In addition to what is shown in the diagram, we will also use arc42 for documentation.
- Motivation: Initially, the application is deployed using Docker on developers’ systems, depending on resource availability. This way, each developer has a local deployment environment for testing.
- Quality and/or Performance Features:
  - The system is designed as microservices, each running in its own container for better scalability and fault tolerance.
  - A database service will be used for structured data, and a File Storage Service will handle multimedia content.
  - arc42 documentation will be managed within a dedicated Docker container.
8. Cross-cutting Concepts
8.1. User Experience (UX)
- Usable Interface:

| Aspect | Description |
|---|---|
| Ease of Use | A simple, predictable, and familiar interface design will be presented, ensuring that all essential elements and options of the application are easily accessible. |
| Intuitive | The system will provide an intuitive interface, making it easy for users to understand. |
| Stability | The application’s loading times will be minimized to ensure a smooth experience. |

- Immediate Feedback: The user will instantly see whether their answer was correct. Additionally, the game history, rankings, and generated questions will always be up to date.
8.2. Security & Protection
- Secure Access Control: User authentication security will be enforced by verifying the correctness of the entered credentials and denying access otherwise.
8.3. Under-the-Hood
- Persistence: Both user data and game records will be stored to ensure their integrity and availability.
- Maintainability: The code is written clearly and legibly, following a modular approach to facilitate maintenance when fixing bugs or adding improvements.
- Extensibility: The application is built in a way that allows new functionalities to be added easily in the future without significantly affecting existing components.
8.4. Development
- Implementation: The application will be developed using JavaScript. The front-end will be built with React, while Node.js and a microservices architecture will be used for the back-end. MongoDB will be used for managing the NoSQL database.
- Testing: Various tests will be conducted to ensure a high-quality product.
8.5. Architectural Style
- Layers: A three-layer architecture will be implemented to ensure better organization and modularity:

| Layer | Responsibility |
|---|---|
| Presentation | Responsible for operating and generating the graphical interface displayed to the user. |
| Business Logic | Where all the logic necessary for the correct operation of the application is executed. |
| Persistence | Used to store and retrieve the data needed for both the player and the question-and-answer game system. |
8.6. Concept Map

9. Architecture Decisions
9.1. Service-Oriented System
Our system follows a service-based architecture, where each service is responsible for a specific domain of functionality. This modular approach enhances scalability, separation of concerns, and maintainability.
Some core services include:
- AuthService: Handles user authentication and token management.
- UserService: Manages user profiles, roles, and preferences.
- QuestionService: Stores and retrieves generated quiz questions.
- LLMService: Coordinates AI-based content generation (e.g., questions and hints).
- WikidataService: Provides trivia data retrieved from structured knowledge sources.
Each service is stateless, autonomous, and communicates with others via HTTP calls through the Gateway.
9.2. Gateway Service
The Gateway acts as the central entry point for all external clients. It is responsible for:
- Routing requests to the appropriate internal services.
- Centralizing authentication, authorization, and input validation.
- Abstracting the internal service structure from external consumers.
This simplifies client interaction and ensures consistent policies across the system.
9.3. Example Workflow of How the Architecture Works
To better understand how our service-based architecture operates in practice, we describe two typical workflows that involve several services working together: question generation and AI-powered hint generation.
9.3.1. Question Generation
When an admin user requests a batch of quiz questions, the system coordinates several services:
- The frontend sends a request to the Gateway, which routes it to the service responsible for AI interactions.
- This service fetches relevant trivia data from a knowledge base and uses an external LLM provider to generate questions based on that data.
- Each generated question is then stored via another service that manages persistent storage.
- Finally, the response is sent back through the Gateway to the user.
This is a good example of how services collaborate, each handling a specific responsibility: routing, LLM integration, data retrieval, and persistence.
9.4. Bulk Question Generation and Caching
To address the delay experienced when starting a game due to real-time question generation, we introduced a pre-generation and caching strategy.
- When a new game starts, the system first attempts to load questions from the database.
- If no suitable cached questions are found, it generates a batch of questions on the fly using the LLM and Wikidata services.
- These questions are then saved to the database for reuse in future games.
This drastically reduces perceived latency and improves responsiveness when starting a new game session.
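The steps above amount to a load-or-generate helper. The function and collaborator names are illustrative; in the real system the collaborators are HTTP calls through the Gateway:

```javascript
// Load cached questions first; top up from the LLM + Wikidata pipeline only
// when the cache cannot satisfy the request, then persist the new batch so
// future games start instantly.
async function getQuestionsForGame(category, count, db, generateBatch) {
  const cached = await db.findQuestions(category, count);
  if (cached.length >= count) return cached.slice(0, count);
  const fresh = await generateBatch(category, count - cached.length);
  await db.saveQuestions(fresh); // cache for reuse in future games
  return [...cached, ...fresh];
}
```

The slow LLM path is only taken for the shortfall, so the cache warms up over time and the worst case shrinks with every game played.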
9.5. Technology Decisions
Each decision was made based on the project requirements, the team’s prior experience, and the technologies provided in the base template.
10. Quality Requirements
The WIChat system prioritizes five critical quality attributes to ensure success:
- Usability:
  - Goal: An intuitive interface that allows users to play, access hints, view/edit their profile, and inspect their game statistics with minimal friction.
- Maintainability:
  - Goal: A modular, well-documented codebase (separation of concerns, clear folder structure, descriptive comments) to facilitate onboarding and future feature additions (e.g., new quiz categories).
- Performance:
  - Goal: Low latency in all interactive operations:
    - Quiz question generation (≤ 200 ms per request under normal load).
    - LLM prompt/response turnaround (≤ 500 ms).
    - Profile and statistics retrieval (≤ 150 ms).
- Security:
  - Goal: Robust protection of user credentials and session data, and resistance to common web attacks:
    - Password hashing: All passwords are salted and hashed with bcrypt before storage.
    - Session tokens: JWTs signed with a strong secret and a short TTL (1 h), transmitted via HTTPS; no use of opaque session IDs in URLs.
    - No token reuse: Tokens are validated on every request; a client cannot assume another identity simply by presenting a different token.
    - Input validation: Every input field (e.g., login credentials) is validated.
- Functionality:
  - Goal: Reliably satisfy core business workflows:
    - Automatic question/answer generation from Wikidata entries (with image support).
    - Conversational hint service with RAG-based hallucination mitigation.
    - Accurate user statistics tracking and retrieval.
    - User profile editing (username, password, avatar) with immediate effect.
These quality requirements guide every architectural and implementation decision, ensuring WIChat meets stakeholder expectations and delivers a seamless, secure user experience.
10.1. Quality Tree
| Quality Attribute | Goal/Description | Associated Scenarios |
|---|---|---|
| Usability | Intuitive UI for gameplay, hints, stats viewing, and profile editing | |
| Maintainability | Modular, documented code; clear separation of services and helpers | |
| Performance | Low latency in question generation, LLM calls, and user data access | |
| Security | Bcrypt-hashed passwords, JWT auth, input validation | |
| Functionality | End-to-end quiz flows, hints with RAG, user stats and profile management | |
10.2. Quality Scenarios
| Scenario | Stimulus / Source | Environment | Artifact | Expected Response |
|---|---|---|---|---|
| System requests N new quiz questions | | Normal load | Wikidata Service & LLM | All questions match real data; no hallucinations |
| 20 users simultaneously start quizzes | | Peak demand | Quiz API | Average response ≤ 200 ms |
| Malicious login attempts (brute-force) | | Internet | Auth Service + Database | Passwords hashed with bcrypt; input validated; JWT sessions managed securely |
| New user navigates to hints, stats, profile | | First use | Frontend UI | Hints, stats, profile pages load in ≤ 150 ms; clear labels & error feedback |
| Developer adds “Art History” category | | Dev environment | Codebase | Module and tests added in ≤ 2 hours |
11. Risks and Technical Debt
11.1. Technical Risks
- Inadequate Version Control Management
  - Possible Issues:
    - GitHub conflicts due to multiple team members collaborating.
    - Risk of code loss or overwriting.
  - Preventive Measure:
    - Define a clear Git workflow with mandatory Pull Requests.
- Tight Deadlines and Lack of Experience
  - Possible Issues:
    - Inability to complete planned tasks due to other courses or poor time estimation.
    - Difficulties in implementing advanced features due to lack of experience in JavaScript.
    - Increased number of errors due to limited proficiency in the language.
  - Preventive Measures:
    - Better task organization and development time estimation.
    - Self-learning of the language to improve proficiency.
- Documentation Deficiencies
  - Possible Issues:
    - Code with few comments and insufficient technical documentation.
    - Difficulty for other team members to understand the existing code.
  - Preventive Measure:
    - Maintain clear and up-to-date documentation in the GitHub repository.
- Lack of Automated Testing
  - Possible Issues:
    - Dependence on manual testing, which is prone to errors.
    - Increased time to detect and fix bugs.
  - Preventive Measure:
    - Introduce unit and functional testing using tools like Jest or Mocha.
- Lack of Code Standards
  - Possible Issues:
    - Different programming styles within the team.
    - Difficulty in unifying code from different team members.
  - Preventive Measure:
    - Define common code standards to ensure consistency and ease of collaboration.
- Inefficient and Repetitive Code
  - Possible Issues:
    - Lack of modularity and code reuse.
    - Difficulty in project maintenance and scalability.
  - Preventive Measure:
    - Apply modular programming principles and perform periodic refactoring.
- Suboptimal Performance
  - Possible Issues:
    - Inefficient use of data structures and algorithms.
    - Potential performance issues during application execution.
  - Preventive Measure:
    - Review and optimize the code once it is functional.
11.2. Technical Debt
- Task organization: Poor organization of the tasks to be carried out. We started by selecting a few key points to work on, but as we developed them, and due to the relationships between them, we ended up covering more than initially selected. As a result, the current application has a significant amount of development across all parts, but none of them is fully complete or entirely functional.
- No tests created: Functionality has been developed, but no testing has been done. This means the code coverage is below the required level, and it is not entirely clear whether everything implemented works correctly. Tests must be written as soon as possible, with proper planning, since they take time and should not hold up development.
- Tests created with low coverage: Initially, tests were created with a low coverage percentage. While some progress has been made in increasing test coverage, it is still under 80%. Efforts must continue to ensure comprehensive test coverage across the application.
- Using Empathy, then switching to Gemini: We initially used Empathy for LLM-based responses. However, the model frequently failed, causing delays and extra work. Over time this debt accumulated, and we eventually replaced it with Gemini. This decision helped stabilize the application’s performance but caused some disruption earlier in development.
- Direct service calls without using the Gateway: At the start, services were implemented and called directly, without a Gateway. This led to technical debt, as we later had to invest significant time refactoring to introduce the Gateway, which is necessary for scalability and proper separation of concerns.
- Insufficient documentation updates: The documentation was not sufficiently updated during development, leading to discrepancies between the actual implementation and the documentation. As a result, certain parts of the project were not properly reflected, making it difficult for team members and future contributors to follow the project.
- Initial game images generated with AI, causing delays: At first, we used AI to generate images for the game based on the questions. However, this approach increased wait times, and image quality was not optimal. We subsequently switched to images obtained from Wikidata, which proved more efficient.
12. Glossary
Term | Definition |
---|---|
JavaScript | JavaScript is a high-level, dynamic, and event-driven programming language primarily used for web development. It is an interpreted, prototype-based, weakly typed language, running in the browser through the JavaScript engine. It can also be used on the backend with environments like Node.js. |
Frontend | JavaScript in the frontend is used to manipulate the DOM, handle events, and enhance web interactivity. It runs in the browser and works alongside HTML and CSS to create dynamic experiences. Frameworks like React make it easier to develop more structured and efficient applications. |
React | React is a JavaScript library for building user interfaces efficiently and modularly. It follows a component-based approach and uses a Virtual DOM to improve performance. Developed by Facebook, it is primarily used in the frontend to create interactive and dynamic web applications. |
Backend | On the backend, JavaScript is used with environments like Node.js to handle servers, databases, and business logic. It enables API creation, HTTP request management, and database connections with systems like MongoDB. |
Node.js | Node.js is a JavaScript runtime environment based on Chrome's V8 engine, designed to execute code outside the browser. It is asynchronous and event-driven, making it ideal for real-time applications and scalable servers. It uses the CommonJS module system and has npm for package and dependency management. |
MongoDB | MongoDB is a NoSQL document-oriented database that stores data in BSON format (similar to JSON). It is scalable, flexible, and allows handling large amounts of data without a fixed structure. It integrates well with Node.js and is commonly used in modern web applications. |
13. Test Documentation
This section provides a structured overview of the automated tests implemented in the project.
13.1. Unit Testing
13.1.1. historyservice
Tests related to storage and processing of chat and game history.
- hitory-model.test.js: Tests the model responsible for storing and retrieving chat history.
- history-stats-service.test.js: Validates statistical calculations on historical data.
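As an illustration of the kind of logic such statistics tests exercise, a helper that aggregates game-history records might look like the sketch below. This is a hypothetical example: the record shape (`{ correct, wrong, timeMs }`) and the function name are assumptions, not the actual historyservice API.

```javascript
// Illustrative sketch only -- the real history-stats-service may differ.
// Computes aggregate statistics over hypothetical game-history records.

function computeGameStats(games) {
  if (games.length === 0) {
    return { gamesPlayed: 0, totalCorrect: 0, totalWrong: 0, accuracy: 0, avgTimeMs: 0 };
  }
  const totalCorrect = games.reduce((sum, g) => sum + g.correct, 0);
  const totalWrong = games.reduce((sum, g) => sum + g.wrong, 0);
  const totalTime = games.reduce((sum, g) => sum + g.timeMs, 0);
  return {
    gamesPlayed: games.length,
    totalCorrect,
    totalWrong,
    // Fraction of all answered questions that were correct.
    accuracy: totalCorrect / (totalCorrect + totalWrong),
    avgTimeMs: totalTime / games.length,
  };
}
```

Because a helper like this is a pure function, a Jest test can assert on its output directly, with no database or network involved.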
13.1.2. questionsService
Tests for handling questions and user-question interaction history.
- questions-model.test.js: Ensures question model logic behaves as expected.
- question-history-service.test.js: Tests logic related to question selection and historical storage.
13.1.3. webapp (Frontend React components)
Tests for UI components and their auxiliary logic.
- HowToPlayWindow.test.js: Verifies tutorial/instruction modal behavior.
- editProfileWindowAuxFunc.test.js: Tests helper functions for profile editing.
- gameOptions.test.js: Validates correct handling of game configuration inputs.
- editProfileWindow.test.js: UI test for editing the user profile.
- QuestionTimer.test.js: Checks timer countdown and time expiration behavior.
- AddUser.test.js: Tests the component for adding new users.
- GameWindow.test.js: UI and state transition checks for the main game view.
- ChatClues.test.js: Ensures clues are displayed properly during the game.
- home.test.js: Tests the landing page behavior.
- StatisticsWindow.test.js: Ensures user stats display correctly.
- allQuestionsWindow.test.js: Tests the display of all answered questions.
- Game.test.js: Core game logic integration test.
- navBar.test.js: Ensures the navbar displays and routes correctly.
- EndGameWindow.test.js: Validates the end-of-game summary screen.
- Login.test.js: Tests login UI behavior and validation.
- Auth.test.js: Authentication-related logic and routing tests.
13.1.4. gatewayservice
- gateway-service.test.js: Validates core functionalities of the gateway layer and inter-service routing.
13.1.5. llmservice
Tests for interactions with the language model backend.
- llm-service.test.js: Main test suite for verifying service logic.
- getRandomEntriesAuxiliaryFunction.test.js: Tests the helper for random entry selection.
- generate-questions.test.js: Validates question generation logic.
- parseJSONAuxFunction.test.js: Tests the robustness of JSON parsing.
- sendQuestionToLLM.test.js: Ensures API communication with the LLM works correctly.
- llm-service-AuxiliaryFunctions.test.js: Collection of miscellaneous helper function tests.
13.1.6. wikidataservice
Tests for caching and querying Wikidata.
- wikidataCacheService.test.js: Validates caching behavior.
- wikidata-service.test.js: Tests high-level query logic.
- wikidataQueries.test.js: Ensures individual query builders function correctly.
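To illustrate what query-builder tests can verify: a Wikidata query builder is typically a pure function that returns a SPARQL string, so tests can assert on the generated text without any network access. The sketch below is hypothetical (the function name and entity choice are assumptions, not the service's actual builders); `wdt:P31` ("instance of") and `wd:Q515` ("city") are real Wikidata identifiers.

```javascript
// Hypothetical sketch of a Wikidata SPARQL query builder; the real
// builders in wikidataservice may target different entities and shapes.

function buildCityQuery(limit = 10) {
  return `
SELECT ?city ?cityLabel WHERE {
  ?city wdt:P31 wd:Q515 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT ${limit}`.trim();
}
```

A unit test then only needs string assertions, e.g. that the query selects instances of city and respects the requested limit.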
13.1.7. userservice
- auth-service.test.js: Tests user authentication logic.
- user-model.test.js: Model tests for user creation, validation, and retrieval.
- user-service.test.js: Service-level tests for managing user data.
13.2. Load Testing with Gatling
A load test was performed using the Gatling tool to evaluate the performance of the application under stress.
13.2.1. Objective
To assess the behavior of critical endpoints under high user concurrency conditions.
13.2.2. Test Design
- Tool: Gatling
- Endpoints tested:
  - POST /adduser (user registration)
  - POST /login (user authentication)
- Simulated users: 1000
- Ramp-up duration: 60 seconds
- Pause strategy: Controlled pauses between actions to simulate realistic user behavior
- Assertions:
  - Maximum response time must be ≤ 5000 ms
  - At least 95% of requests must succeed (HTTP 200 or 201)
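The real thresholds are declared inside the Gatling simulation itself, but they amount to the kind of check sketched below in plain JavaScript (the function name and the `{ status, responseTimeMs }` sample shape are assumptions for illustration only):

```javascript
// Illustrative only -- the actual assertions live in the Gatling simulation.
// Evaluates load-test samples against the two acceptance thresholds.

function checkLoadTestThresholds(results, maxResponseMs = 5000, minSuccessRate = 0.95) {
  const maxObserved = Math.max(...results.map((r) => r.responseTimeMs));
  const successes = results.filter((r) => r.status === 200 || r.status === 201).length;
  const successRate = successes / results.length;
  return {
    maxObserved,
    successRate,
    passed: maxObserved <= maxResponseMs && successRate >= minSuccessRate,
  };
}
```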
13.2.3. Results
- The system respected the maximum response time threshold of 5000 ms.
- The success rate consistently exceeded 95%.
- HTML reports (index.html and request-specific reports) confirmed that both the login and registration endpoints handled the load without significant performance degradation.
13.2.4. Conclusion
The system demonstrated robustness and scalability, successfully supporting high traffic without compromising stability or performance.
13.3. End-to-End (E2E) Testing
End-to-End tests were implemented using the jest-cucumber and puppeteer frameworks. These tests simulate real user behavior interacting with the application through the browser, ensuring the system works as a whole.
13.3.1. Tools and Frameworks
- Test Runner: Jest
- BDD Layer: jest-cucumber
- Automation: Puppeteer
- Execution: Locally and in CI (GitHub Actions compatible)
13.3.2. E2E Scenarios Implemented
- register-form.feature + 01-register-form.steps.js:
  - Simulates a new user registering on the platform.
  - Verifies the presence of a success message upon form submission.

- login-form.feature + 02-login-form.steps.js:
  - Simulates login of a previously registered user.
  - Verifies redirection to /home on success.

- stats-access.feature + 04-stats-access.steps.js:
  - Simulates a user logging in and navigating to the statistics page.
  - Ensures redirection to /statistics occurs successfully.

- questions-access.feature + 05-questions-access.steps.js:
  - Simulates a user logging in and accessing the questions page.
  - Validates redirection to /questions.
13.3.3. Conclusion
These E2E tests cover critical user flows including registration, authentication, and navigation. They are crucial for regression testing and confidence in the deployed UI.