About arc42

arc42, the template for documentation of software and system architecture.

Template Version 8.2 EN. (based upon AsciiDoc version), January 2023

Created, maintained and © by Dr. Peter Hruschka, Dr. Gernot Starke and contributors. See https://arc42.org.


Note

This version of the template contains some help and explanations. It is used for familiarization with arc42 and for understanding its concepts. For documenting your own system, the plain version is the better choice.

1. Introduction and Goals

WIChat is an AI-powered web application developed for RTVE: an online quiz inspired by Saber y Ganar. The project introduces a conversational AI feature that provides hints to contestants using an LLM.

This document describes the key requirements and quality goals for the project. It also identifies the stakeholders and their expectations.

1.1. Requirements Overview

  • User registration and authentication: Users must be able to create accounts and log in.

  • Image-based questions: The game will display an image, and users must identify its location.

  • AI-generated hints: Players can request hints about the image through a conversational interface powered by an external LLM.

  • Automatic content generation: Images and hints must be dynamically created using Wikidata.

  • Game scoring and rewards: Users earn points or rewards for correct answers.

  • API access: External systems should be able to retrieve quiz data and user performance information through an API.

  • Time-limited responses: Each question must be answered within a given timeframe.

  • Leaderboard: Users can view their ranking and compare their performance with others.

1.2. Quality Goals

Quality Goal Motivation Concrete Scenario

Usability

The game must provide an intuitive and engaging user experience

The UI should be clear and easy to navigate, with a conversational AI that is easy to interact with

Performance

AI-generated hints should be delivered quickly to avoid disrupting gameplay

Hints generated via the LLM should have a response time of less than 3 seconds

Security

Protecting user data and preventing AI model abuse is critical

User interactions should be encrypted and respect user privacy and data protection laws.

AI Adaptability

The LLM should provide relevant and contextually appropriate hints

The AI must be fine-tuned to ensure accurate and helpful responses based on Wikidata sources

Scalability

The system must handle multiple simultaneous users without slowdowns

The game should support multiple concurrent players with minimal latency

1.3. Stakeholders

Role/Name Contact Expectations

Users

General public, quiz players

Engaging and interactive quiz experience with AI-generated hints

Development Team

Carballo Pérez, Andrés (UO287983@uniovi.es)
Gutiérrez García, Miguel (UO295650@uniovi.es)
Mahía Loredo, Adrián (UO289390@uniovi.es)
Pérez Cuervo, Bruno (UO295445@uniovi.es)

Well-documented architecture and a system that is easy to maintain and extend

Professors

Pablo González González
(gonzalezgpablo@uniovi.es)
Jose Emilio Labra Gayo
(labra@uniovi.es)
Diego Martín Fernández
(martinfdiego@uniovi.es)
Irene Cid Rico
(cidirene@uniovi.es)

A project demonstrating LLM integration and structured team collaboration

RTVE

https://www.rtve.es

An engaging online game that aligns with their brand and audience expectations

2. Architecture Constraints

Restriction Description

Git/GitHub

Git will be used as the version control system and GitHub as the platform to store the code, allowing for efficient code management, project change tracking, and facilitating collaboration among team members. Additionally, a lack of proficiency with these tools could lead to lower efficiency and greater challenges.

Docker

The application will be deployed using Docker, allowing for a more consistent and flexible deployment, which facilitates the management of scalability and dependencies.

AsciiDoc

All documentation associated with the project must be developed using AsciiDoc, a lightweight and structured format.

Time

The project must be developed, tested, and completed according to the established deadlines; organization and effective use of the team's time, including meetings and planning, will be key.

Web Accessible

The application must be deployed and accessible via web, allowing users to interact with the system from any device with an internet connection, without the need for local installations.

Wikidata

The system must obtain data to generate questions and images from Wikidata, ensuring a structured, updated, and verifiable source of information.

LLM

A language model will be integrated to generate hints for each question through interaction with the application.

3. Context and Scope

3.1. Business Context

Business Context
Element Description

User

The contestant who interacts with the application; they can ask the integrated chatbot for hints and answer the displayed questions.

WIChat

Main web application where the game logic takes place.

Wikidata

Source from which the questions and answers will be extracted.

LLM_API

API integrating an LLM that will be used to dynamically generate hints and follow a conversational dialogue to help the contestant answer the questions without explicitly giving away the answer.

Database

Storage system that keeps relevant user information, past game sessions, and generated questions. There is one instance for user-related information and another for questions.

3.2. Technical Context

3.2.1. Architecture Diagram

Architecture Diagram

3.2.2. Technical Interface Explanation

Gateway

API that links the different parts of the application.

React Application

React provides the user with an interface to interact with, answer questions, and make requests to the backend.

User Database

Stores all user-related information, including profile data and game session history.

Authentication Service

Interface that communicates with the user database to verify whether a login attempt is valid.

User Service

Interface that communicates with the user database to query or update game-related information.

LLM Service

Service that generates hints with Gemini AI and returns responses to the user's questions; prompt engineering in the code constrains it to answer only game-related questions (a sketch of this constraint follows the list below).

Question Service

Service that processes information from Wikidata to generate questions and possible answers.

Question Database

Stores questions generated by the Question Service for later use in the game.

WikiData API

WikiData provides all question-related content, including images, answers, and the correct response.

Gemini AI

The LLM we are using for hints.
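
The prompt-engineering constraint mentioned above for the LLM Service can be sketched as follows. The function names, the exact wording of the system prompt, and the injected sendToGemini helper are illustrative assumptions; only the intent (game-related hints that never reveal the answer) comes from this document.

[source,javascript]
----
// Sketch of the prompt-engineering guard in the LLM Service (hypothetical names).
// The system prompt constrains Gemini to game-related hints and forbids revealing the answer.
function buildHintPrompt(question, correctAnswer, userMessage) {
  return [
    'You are a hint assistant for the WIChat quiz game.',
    'Only answer questions related to the current quiz question.',
    `The question is: "${question}".`,
    `The correct answer is "${correctAnswer}", but you must NEVER state it explicitly.`,
    'Give short, indirect hints based on facts about the subject.',
    `Player message: "${userMessage}"`,
  ].join('\n');
}

// sendToGemini is a placeholder for the actual call to the Gemini API.
async function getHint(question, correctAnswer, userMessage, sendToGemini) {
  const prompt = buildHintPrompt(question, correctAnswer, userMessage);
  return sendToGemini(prompt); // returns the generated hint text
}

module.exports = { buildHintPrompt, getHint };
----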

3.2.3. Input/Output Channel Mapping

Channel Input Output

React Application

HTTP requests from the user with actions

Response through the interface.

Gateway

REST requests from the web app to obtain question data (images, answers) or perform operations like login

Response with the requested information in each case.

User Service

Data queries to the database

Response to the query.

Authentication Service

User login data

Data verification query to check if the credentials are correct.

LLM Service

Prompt with the user's hint request

Hint generated by the model.

WikiData API

Request for question information

Response with the requested information.

Question Service

A request from the game when it needs a question

Question and answers generated using the information retrieved from Wikidata API.

User Database

Database query or insert operations (MongoDB)

Query results or confirmation of insertions.

Question Database

Fully generated questions from question service

The questions when the game needs them.

Prometheus

Gateway metrics

Metrics stored in its database.

Grafana

Metrics organized in Prometheus

Visualization of metrics.

4. Solution Strategy

4.1. Technology Decisions

As part of the project’s initial planning phase, we have identified a set of technologies that we believe will be well-suited to implement the WiChat application. These technologies align with the project requirements and will guide the development process once the implementation begins.

  • JavaScript – A versatile, high-level programming language widely used for developing dynamic and interactive web applications. It allows for both client-side and server-side development, making it an ideal choice for the full-stack needs of the project.

  • React – A JavaScript library designed for building efficient user interfaces for single-page applications. React’s component-based architecture will enhance reusability and maintainability, which will be key once development begins.

  • Node.js – A runtime environment for executing JavaScript on the server side. Node.js is well-suited to handling asynchronous operations, making it a good choice for scalable network applications and managing database interactions.

  • Express.js – A lightweight web framework built on top of Node.js, planned to simplify backend development. Express will provide essential features such as routing, middleware management, and API creation, ensuring smooth backend processes during the project’s development.

  • MongoDB – A NoSQL, document-oriented database. MongoDB offers flexibility, scalability, and high availability. Its JSON-like document structure is well-suited for applications that may need to dynamically adjust the database schema as the project evolves.

  • Docker – A containerization platform that allows developers to package applications along with their dependencies, ensuring consistent execution across different environments. This will simplify deployment and enhance scalability when the system is up and running.

  • GitHub – A version control platform that will be essential for managing the project’s codebase. It will enable collaborative development, tracking of changes, and automation of workflows, particularly important for a smooth development process as the team grows.
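
To illustrate how several of these technologies could fit together, here is a minimal sketch of the gateway idea: an Express app that forwards requests to other services. The route names, ports, service URLs, and the use of the axios HTTP client are assumptions for illustration, not details confirmed by this document.

[source,javascript]
----
// Minimal Express gateway sketch (illustrative; real routes and ports may differ).
const express = require('express');
const axios = require('axios');

const app = express();
app.use(express.json());

// Assumed internal service URLs (e.g. Docker service names).
const AUTH_URL = process.env.AUTH_SERVICE_URL || 'http://authservice:8002';
const QUESTION_URL = process.env.QUESTION_SERVICE_URL || 'http://questionservice:8004';

// Forward login requests to the Auth Service.
app.post('/login', async (req, res) => {
  try {
    const response = await axios.post(`${AUTH_URL}/login`, req.body);
    res.json(response.data);
  } catch (err) {
    res.status(err.response ? err.response.status : 500).json({ error: 'Auth service error' });
  }
});

// Forward question requests to the Question Service.
app.get('/questions', async (req, res) => {
  try {
    const response = await axios.get(`${QUESTION_URL}/questions`, { params: req.query });
    res.json(response.data);
  } catch (err) {
    res.status(500).json({ error: 'Question service error' });
  }
});

app.listen(8000, () => console.log('Gateway listening on port 8000'));
----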

4.2. System Architecture

The architecture for the WIChat application is planned as a service-based architecture: a modular approach that allows distinct services to be developed and maintained independently. It is not strictly a microservices architecture, because the User and Auth services share the same database instance, while the Question Service uses a separate one. This structure is chosen to provide scalability, flexibility, and ease of maintenance as the project progresses.

The system will consist of the following services, each designed to handle specific responsibilities:

  • User Service – This service will manage user registration, profile management, and the friend system.

  • Gateway Service – The gateway service will serve as the public-facing entry point, routing incoming requests to the appropriate microservices.

  • WebApp Service – This is the frontend web application, which will interact with the gateway service to provide essential functionalities like user authentication.

  • Auth Service – The auth service will manage user authentication processes, ensuring secure login and session handling.

  • Question Service – This service will gather information from Wikidata to construct questions and store them in the database for later use in the game (a query sketch follows this section).

These decisions are being made with the goal of setting up a solid foundation for the development, ensuring that once work begins, the system will be modular and capable of scaling as needed.
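
As a concrete illustration of the Question Service's data gathering, the sketch below queries Wikidata's public SPARQL endpoint for entities with images (cities are used purely as an example category). The query, the helper name, and the use of Node 18's global fetch are assumptions; the real service may use different queries and entity types.

[source,javascript]
----
// Sketch: querying Wikidata's SPARQL endpoint from the Question Service.
// The query (cities with images) is an illustrative example, not the service's real query.
const WIKIDATA_SPARQL = 'https://query.wikidata.org/sparql';

async function fetchCityImages(limit = 10) {
  const query = `
    SELECT ?city ?cityLabel ?image WHERE {
      ?city wdt:P31 wd:Q515;       # instance of: city
            wdt:P18 ?image.        # image
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    LIMIT ${limit}`;

  const url = `${WIKIDATA_SPARQL}?query=${encodeURIComponent(query)}&format=json`;
  const response = await fetch(url, {
    headers: { Accept: 'application/sparql-results+json' },
  });
  const data = await response.json();

  // Map the raw SPARQL bindings into simple { name, imageUrl } objects
  // that the service could turn into "Which city is shown in this image?" questions.
  return data.results.bindings.map((b) => ({
    name: b.cityLabel.value,
    imageUrl: b.image.value,
  }));
}

module.exports = { fetchCityImages };
----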

4.3. Decisions Made to Achieve Quality Goals

Before beginning the development of the project, we identified several quality goals that will guide our design and implementation strategy. The following decisions were made to address these goals and ensure the system meets the expected standards once development begins.

Quality Goal Decision Made

Usability

The application must be designed to be intuitive and accessible for users with various backgrounds. To ensure usability, we have planned:

  • Conducting user feedback sessions and usability tests at the early stages of development.

  • Choosing an appropriate typography style and sizing to ensure legibility and user comfort.

Performance Efficiency

The system needs to be optimized to handle potential traffic surges and deliver quick responses to users. The following strategies are planned to optimize performance:

  • Reducing request frequency to external services (like WikiData) to avoid overloading the system.

  • Storing the information retrieved from WikiData and processed by the Question Service in a database for quick access.

Maintainability & Scalability

The system must be modular, easy to maintain, and scalable for future growth. To ensure long-term maintainability and scalability, we have outlined the following approaches:

  • Choosing a service-based architecture that allows for modular development and easy updates or additions.

  • Implementing key design patterns, including the Gateway Pattern (a single entry point for external requests to streamline access) and the Single Responsibility Principle (ensuring each service has a focused and clear role).

  • Maintaining clean and documented code, following industry best practices to ensure ease of future modifications.

Security

Protecting user data and maintaining the security of the system is essential. The following security measures are being considered:

  • Utilizing JWT-based authentication to securely manage user sessions.

  • Implementing role-based access control (RBAC) to restrict access to sensitive functionalities.

  • Ensuring secure coding practices to protect against threats such as SQL injection and cross-site scripting (XSS).

  • Enabling SSL/TLS encryption to secure data during transmission between clients and servers.

Availability & Reliability

The application must be available at all times and provide a reliable experience to users. To ensure availability, the following actions are being planned:

  • Configuring automated monitoring and alerting systems to detect potential issues and trigger immediate resolution processes.

  • Designing the system with fault tolerance in mind, so that key services can continue running even in case of failures.
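
The security decisions above mention JWT-based authentication. The sketch below shows one way an Express service could verify such tokens; the jsonwebtoken package, the Bearer header convention, and the payload fields are assumptions rather than confirmed implementation details.

[source,javascript]
----
// Sketch of JWT verification middleware (assumes the jsonwebtoken package).
const jwt = require('jsonwebtoken');

const JWT_SECRET = process.env.JWT_SECRET || 'change-me'; // shared secret (assumption)

function verifyToken(req, res, next) {
  // Expect "Authorization: Bearer <token>" on protected routes.
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;
  if (!token) {
    return res.status(401).json({ error: 'Missing token' });
  }
  try {
    req.user = jwt.verify(token, JWT_SECRET); // decoded payload, e.g. { username, role }
    next();
  } catch (err) {
    res.status(401).json({ error: 'Invalid or expired token' });
  }
}

module.exports = { verifyToken };

// Usage (illustrative): app.get('/profile', verifyToken, handler);
----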

4.4. Organizational Decisions

We have outlined the following organizational decisions to help guide the development process once the project begins:

  • Task Distribution – We plan to distribute tasks as evenly as possible among team members to ensure a balanced workload. Once a task is completed, at least 50% of the team must review it before it is considered valid. This review process will help maintain high-quality standards throughout the development cycle.

  • GitHub Issues – To manage tasks effectively, we will utilize GitHub Issues. This will allow us to create, assign, and track tasks while also providing a platform for discussing any critical decisions that may arise during development.

  • GitHub Actions – We will use GitHub Actions to create workflows for CI/CD, for deploying the documentation, and for deploying the application itself. This will help us better track our progress and find bugs during development.

  • Language – The documentation and code will be developed in English. This decision ensures that both the code and documentation are accessible to all team members, as well as to future contributors.

  • External Meetings – Regular external meetings will be held to review the project’s progress and determine the next steps. These meetings will help ensure we stay aligned with the project goals and timelines.

  • Internal Communication – We will use Discord and WhatsApp as the primary communication platforms for internal discussions. These tools will facilitate real-time collaboration and help resolve issues as they arise during development.

  • Documentation – For documentation purposes, we have decided to use AsciiDoc. This tool was chosen because it simplifies the process of deploying and maintaining project documentation, making it easier for the team to collaborate and contribute.

5. Building Block View

5.1. Whitebox Overall System

First Level

First level
Motivation

WIChat is the overall system in which a user can play the game, interacting with an LLM for hints.

Contained Building Blocks

Name

Description

User

Client of the application which will interact with it.

WIChat

System developed to be used by the users.

Wikidata

Application used to generate the questions and answers.

Gemini AI

API used for the hints.

Important Interfaces

- API communication between WIChat and external services (WikiData, Gemini AI).

5.2. Level 2 - Internal Structure of WIChat

Detailed System Overview

Second level
Contained Building Blocks
Name Description

GatewayService

Manages incoming requests and routes them to appropriate services.

UserManager

Handles authentication, user management, and ranking system.

LLMService

Connects to Gemini AI to generate hints for users.

QuestionService

Communicates with WikiData to retrieve questions and answers.

Wikidata

External service for retrieving data to compose questions and answers.

WebApp

User-facing interface for gameplay and interactions.

Important Interfaces
  • REST API for WebApp to communicate with GatewayService.

  • Internal service-to-service communication for user authentication and data retrieval.

5.3. Level 3 - Detailed Breakdown of GatewayService

GatewayService Internal Structure

Third level
Contained Building Blocks
Name Description

UserManager

Submodule responsible for managing users and authentication.

AuthService

Handles authentication and security mechanisms.

UserService

Manages user profiles, sessions, and stores and updates leaderboard data.

QuestionService

Retrieves and formats questions from WikiData.

MongoDB Database

Stores user information, rankings, and game data in the user database and stores questions in the question database.

Important Interfaces
  • Communication between UserManager and MongoDB for user data storage.

  • Integration of QuestionService with WikiData API to fetch questions.

  • Secure authentication flows via AuthService.

6. Runtime View

6.1. Login

Login Diagram

6.2. Question generation

Question Generation Diagram

6.3. How the user gets the questions to play the game

Get Questions Diagram

6.4. Clue Generation

Clue Generation Diagram

6.5. Game

Game Process Diagram

6.6. Show User Info

Show User Info Diagram

7. Deployment View

7.1. Infrastructure

The following overview diagram shows the overall deployment of the Quiz Web Application. It illustrates how the system is distributed within an Azure Virtual Machine using Docker containers, and how the individual services interact with each other as well as with external services.

Motivation

The infrastructure has been designed to support a distributed system where each microservice is encapsulated in its own Docker container. This approach facilitates scalability, ease of deployment, and fault isolation. The use of an Azure VM provides a robust hosting environment, while external integrations (as with Wikidata) enrich the application’s functionality.

Mapping of Building Blocks to Infrastructure

Software components are each deployed in their own Docker container within Azure. The GatewayService orchestrates communication among these services, while persistent data is managed by a MongoDB database. The QuestionService also interacts with the external Wikidata service to retrieve dynamic content for the quiz.

Deployment Diagram for Quiz Web Application

7.1.1. Azure VM and Docker Containers

Within Azure, the following Docker containers are deployed:

  • WebApp: Hosts the Quiz Application front-end, which is accessed via a Web Browser by the end user.

  • GatewayService: Acts as the central API gateway, routing user requests from the WebApp to the appropriate back-end services.

  • AuthService: Manages user authentication and connects to the database for credential verification.

  • UserService: Handles user management functions, including registration, profile updates, and statistics.

  • LLMService: Processes natural language queries and interacts with the database to store and retrieve processed data.

  • QuestionService: Generates quiz questions and, in addition to accessing the internal MongoDB, retrieves external data from Wikidata.

  • MongoDB (Database): Provides persistence for user data and other application data like questions.

Communication between these containers is managed internally within Azure, ensuring secure and efficient data transfer.

7.1.2. External Services

  • Wikidata: The QuestionService accesses Wikidata to fetch additional data for quiz questions.

  • Gemini AI: The LLM Service uses Gemini to give the user the hints requested.
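
As an illustration of how the LLM Service could reach Gemini, the following minimal sketch uses Google's @google/generative-ai Node SDK. The SDK choice and the model name (gemini-1.5-flash) are assumptions; the document only states that Gemini AI is used for hints.

[source,javascript]
----
// Sketch of calling Gemini from the LLM Service (SDK and model name are assumptions).
const { GoogleGenerativeAI } = require('@google/generative-ai');

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });

async function askGeminiForHint(prompt) {
  // `prompt` is the constrained, game-related prompt built by the LLM Service.
  const result = await model.generateContent(prompt);
  return result.response.text(); // the generated hint
}

module.exports = { askGeminiForHint };
----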

8. Cross-cutting Concepts

8.1. Domain Concepts

The WIChat system is built around the concept of an interactive question-and-answer game where users receive hints to guess the correct answer. The system integrates data from WikiData and utilizes a Large Language Model (LLM) to generate dynamic hints.

Key domain concepts include:

Users: Participants interacting with WIChat.

Questions & Answers: Content dynamically generated from WikiData.

Hints: AI-generated assistance to guide users.

Ranking: Scores and leaderboards to enhance engagement.

Game settings: Options that allow users to personalize their games.

8.2. User Experience (UX) Concepts

User experience is a fundamental aspect of WIChat, ensuring seamless interaction and intuitive design.

Minimalist UI Design: A clean and distraction-free interface.

Real-time Feedback: Immediate responses to user input.

Gamification Elements: Leaderboards, game settings and friend system to boost user engagement.

8.3. Security and Safety Concepts

Security is a critical component of WIChat, ensuring safe interactions and data integrity.

Authentication and Authorization: User authentication through secure protocols and token validation to access protected resources.

Input Validation: Preventing injection attacks and ensuring data integrity.
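
As a minimal sketch of the input-validation concept, the following function applies the registration rules that also appear in the acceptance tests of section 11.2.3 (usernames and passwords of at least 3 characters). The function name and return shape are illustrative.

[source,javascript]
----
// Sketch of basic input validation for registration data
// (the 3-character minimum mirrors the acceptance tests in section 11.2.3).
function validateRegistration({ username, password }) {
  const errors = [];
  if (typeof username !== 'string' || username.trim().length < 3) {
    errors.push('Username must be at least 3 characters long');
  }
  if (typeof password !== 'string' || password.length < 3) {
    errors.push('Password must be at least 3 characters long');
  }
  return { valid: errors.length === 0, errors };
}

// Example: validateRegistration({ username: 'a', password: 'admin3' })
// => { valid: false, errors: ['Username must be at least 3 characters long'] }
module.exports = { validateRegistration };
----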

8.4. Architecture and Design Patterns

WIChat follows established software design principles to ensure scalability and maintainability.

Services Architecture: Decoupled services to improve scalability. It is not a pure microservices architecture, because the Auth and User services share the same database instance, while the Question Service uses a separate one.

8.5. "Under-the-hood" - Technical Implementation

Backend: Implemented using Node.js with Express.

Frontend: Built with React for a dynamic user experience.

Database: MongoDB as the primary data store.

LLM Integration: Gemini AI used for hint generation.

External Data Source: WikiData for retrieving question-related information.

External Service: Gemini AI generates the hints delivered to the user.
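
To make the MongoDB part more tangible, here is a sketch of how a user document could be modeled. The use of Mongoose, the field names, and the connection URI are assumptions for illustration; the document only states that MongoDB is the primary data store.

[source,javascript]
----
// Sketch of a possible user model with Mongoose (library choice and fields are assumptions).
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
  username: { type: String, required: true, unique: true },
  passwordHash: { type: String, required: true },
  gamesPlayed: { type: Number, default: 0 },
  score: { type: Number, default: 0 },
  friends: [{ type: String }], // usernames of friends (friend system)
  createdAt: { type: Date, default: Date.now },
});

const User = mongoose.model('User', userSchema);

async function connect() {
  // e.g. mongodb://mongodb:27017/userdb inside the Docker network (illustrative URI)
  await mongoose.connect(process.env.MONGODB_URI || 'mongodb://localhost:27017/userdb');
}

module.exports = { User, connect };
----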

8.6. Development Concepts

Version Control: Git and GitHub for source code management.

Continuous Integration/Deployment (CI/CD): Automated testing and deployment pipelines.

Code Reviews: Peer reviews to maintain code quality.

8.7. Operational Concepts

Logging & Monitoring: Centralized logging and performance monitoring tools.

Error Handling: Graceful degradation strategies for unexpected failures.

9. Architecture Decisions

The Architectural Decision Records (ADRs) have been incorporated into the Wiki section of our GitHub page. The corresponding links are provided below:

Content

This section contains all quality requirements as quality tree with scenarios. The most important ones have already been described in section 1.2. (quality goals)

Here you can also capture quality requirements with lesser priority, which will not create high risks when they are not fully achieved.

Motivation

Since quality requirements will have a lot of influence on architectural decisions you should know for every stakeholder what is really important to them, concrete and measurable.

Further Information

See Quality Requirements in the arc42 documentation.

9.1. Quality Tree

Quality Category Quality Description Scenario

Usability

Easy to use

Ease of use by the user when playing games or moving around the application.

SC1

Easy to learn

Game modes should be intuitive.

Maintainability

Robustness

The application must keep responding to user requests even when internal errors occur.

SC2

Persistence

There will be no partial loss of user information and data.

Performance efficiency

Response time

The application should not exceed 3 seconds of waiting time for any job.

SC3

Security

Integrity

User data must be kept confidential and secure at all times.

Content

The quality tree (as defined in ATAM – Architecture Tradeoff Analysis Method) with quality/evaluation scenarios as leafs.

Motivation

The tree structure with priorities provides an overview for a sometimes large number of quality requirements.

Form

The quality tree is a high-level overview of the quality goals and requirements:

  • tree-like refinement of the term "quality". Use "quality" or "usefulness" as a root

  • a mind map with quality categories as main branches

In any case the tree should include links to the scenarios of the following section.

9.2. Quality Scenarios

Id Scenario

SC1

A new user registers in the application and can start playing without the need to view a user guide.

SC2

A user performs an action in the application that results in an internal error, but the user can still use the application normally.

SC3

A user who is playing sees the different questions with little or no waiting time.

Contents

Concretization of (sometimes vague or implicit) quality requirements using (quality) scenarios.

These scenarios describe what should happen when a stimulus arrives at the system.

For architects, two kinds of scenarios are important:

  • Usage scenarios (also called application scenarios or use case scenarios) describe the system’s runtime reaction to a certain stimulus. This also includes scenarios that describe the system’s efficiency or performance. Example: The system reacts to a user’s request within one second.

  • Change scenarios describe a modification of the system or of its immediate environment. Example: Additional functionality is implemented or requirements for a quality attribute change.

Motivation

Scenarios make quality requirements concrete and allow to more easily measure or decide whether they are fulfilled.

Especially when you want to assess your architecture using methods like ATAM you need to describe your quality goals (from section 1.2) more precisely down to a level of scenarios that can be discussed and evaluated.

Form

Tabular or free form text.

9.2.1. Change Scenarios

Change Scenario Motivation Expected Response Priority

Adding New Quiz Categories

To expand content and keep the quiz engaging.

New categories can be integrated into the system without impacting existing functionality.

Medium

Enhancing Chat Assistance

To improve user support by leveraging upgrades in the Gemini LLM.

The chat module can be updated or replaced with enhanced capabilities without affecting core quiz functionality.

Medium

Adding a ranking

To give the user a more competitive view of the game.

The dedicated User service can be extended to store additional information about users and display it in the ranking.

Medium

Game History

To give the user the opportunity to see what went wrong (or right) in past games.

The User service is extended for this, making it possible to view the last games the user has played.

Medium

Friends System

To give the user the possibility of adding other users as friends.

The User service is extended for this: a user can add another user as a friend to see how they are doing in the app. This functionality can easily be extended to build multiplayer games where you play against your friends.

Medium

Adding an admin panel

To give the admin a clear view of the users in the system and manage them.

The User and Auth services are extended for this: deleting users is handled by the User service, and entering the admin panel requires the admin role, which is checked by the Auth service (authorization).

Medium

Game Settings

To give the user the chance to adjust game parameters (response time, number of questions, and which types of questions they want).

The Question service is adapted for this, as it is responsible for the questions, and the game component is adapted to handle the new parameters. The User service is also extended to save the user's settings for the next time they log in to the app.

Medium

10. Risks and Technical Debts

10.1. Risks

Risk Description

Familiarity with technologies

Many of the base technologies used by the project, such as Docker, AsciiDoc, or React, are new to the team. This unfamiliarity, especially at the beginning, can cause slow development and bad decisions.

Implementation

Mistakes by team members, such as an incorrect or inefficient implementation of a solution, can cause major errors. Finding and fixing these errors takes time, which is a very limited resource in this project.

Time

This project must be submitted within the agreed dates, and decisions taken to meet these deadlines can cause a variety of problems. Such decisions include dropping functionalities, lack of testing, poor coding, or insufficient documentation (both on GitHub and in the source code).

Teamwork

The methodology used and the participation, communication, and commitment of the team are crucial for the success of this project.

Design

A bad design decision at the beginning of the project can make development more difficult, with more complications when updating or scaling the project, especially in an application of this size and complexity.

Wikidata

Every aspect related to Wikidata: lack of familiarity with the platform, incorrect use, or possible unavailability of the platform.

Gemini AI

Gemini API is not free, so it can be a big bottleneck if we send too many requests in a short period of time.

10.2. Technical Debt

Technical Debt Description

Documentation

The lack of proper documentation can make the project difficult to maintain or upgrade in the future, as it increases the difficulty of finding mistakes.

Code quality

Prioritizing time over quality when coding can decrease the efficiency of the project. Inefficient methods and repetitive structures should be avoided.

Test-coverage

Poor test coverage can cause long-term problems, as the project could keep evolving with unidentified mistakes.

Obsolete technologies

The use of obsolete or unsupported technologies, such as frameworks or libraries, can lead to reduced performance or security issues.

11. Testing Report

11.1. Unit Tests

Unit tests were conducted across all services and the WebApp frontend to ensure the correct behavior of individual components and services. The primary goals were:

  • Validate the functionality of individual methods, classes, and components.

  • Detect bugs early in the development process.

  • Maintain a high standard of code quality.

Component Summary of Tests

User Manager

Tests cover user information, score tracking, and leaderboard updates.

AuthService

Tests include login validation, token generation, and security checks.

LLMService

Covers scenarios for hint generation and integration with the Gemini AI API.

QuestionService

Tests check that the information retrieved from WikiData and the generated questions meet the expected standards.

GatewayService

Tests for the endpoints, making sure they all work as intended.

WebApp (Frontend)

Component-level testing for React components, including dynamic rendering and user interaction.
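
As an example of the style of unit test described above, the sketch below tests a login endpoint with Jest and supertest. The route, payload, and in-file Express stub are illustrative assumptions and do not reproduce the project's actual test code.

[source,javascript]
----
// Illustrative unit test with Jest and supertest (route and payload are assumptions).
const request = require('supertest');
const express = require('express');

// Minimal stand-in for the Auth Service app, used only by this example.
const app = express();
app.use(express.json());
app.post('/login', (req, res) => {
  const { username, password } = req.body;
  if (username === 'testuser' && password === 'testuser') {
    return res.json({ token: 'fake-jwt-token' });
  }
  res.status(401).json({ error: 'Invalid credentials' });
});

describe('POST /login', () => {
  it('returns a token for valid credentials', async () => {
    const res = await request(app)
      .post('/login')
      .send({ username: 'testuser', password: 'testuser' });
    expect(res.statusCode).toBe(200);
    expect(res.body.token).toBeDefined();
  });

  it('rejects invalid credentials', async () => {
    const res = await request(app).post('/login').send({ username: 'x', password: 'y' });
    expect(res.statusCode).toBe(401);
  });
});
----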

11.2. Acceptance Tests (End-to-End)

End-to-end (e2e) tests were conducted to validate the system’s behavior under real-world scenarios. These tests covered the following features:

11.2.1. Feature: Accessing User Statistics as User

Scenario Steps

The user wants to access their statistics

Given: A user with name "testuser" and password "testuser" When: I log in with the user credentials And: I press the statistics button Then: I should see the profile page for "testuser"

11.2.2. Feature: Playing the Game

Scenario Steps

The user plays a game with their own game values and logs out

Given: A user with name "test" and password "test" When: The user logs in with the user credentials And: The user presses the account button And: The user presses the settings button And: The user changes the value number of questions to 2 And: The user saves the changes And: The user presses the play button And: The user answers the first question And: The user answers the second question And: The game ends and the message "¡Resumen de la partida!" is shown And: The user presses the go menu button And: The user presses the account button And: The user presses the logout button Then: The home page is shown

The user plays a game and the statistics are saved

Given: A user with name "test" and password "test" When: The user logs in with the user credentials And: The user presses the play button And: The user answers the first question And: The user answers the second question And: The game ends and the message "¡Resumen de la partida!" is shown And: The user presses the go menu button And: The user presses the account button And: The user presses the profile button Then: The user played games statistic is 2

11.2.3. Feature: Registering a New User

Scenario Steps

The user is not registered on the site

Given: A user with name "admin3" and password "admin3" When: The user fills the data in the form and presses submit Then: The confirmation message "Redirecting to login…​" should be shown on the screen

The user is not registered but the username is not valid

Given: A user with name "a" and password "admin3" When: The user fills the data in the form and presses submit Then: The message "Username must be at least 3 characters long" should be shown on the screen

The user is not registered but the password is not valid

Given: A user with name "admin4" and password "a" When: The user fills the data in the form and presses submit Then: The message "Password must be at least 3 characters long" should be shown on the screen

The user is not registered but the username is already taken

Given: A user with name "admin3" and password "admin3" When: The user fills the data in the form and presses submit Then: The message "duplicate key error" should be shown on the screen

11.3. Coverage

For our project, we implemented thorough software testing practices to ensure code quality and reliability. Specifically, we utilized SonarQube’s code coverage metric to measure how much of our codebase is covered by automated tests. Code coverage is a critical quality metric that indicates the percentage of code that is executed during the testing process. It helps identify areas of the codebase that lack proper testing, potentially harboring undiscovered bugs or issues. Through our continuous integration pipeline, we achieved a code coverage rate of 81.5%. This means that our test suite successfully executes 81.5% of the application code, providing a substantial degree of confidence in the stability and correctness of our implementation. This coverage level reflects our commitment to developing a robust and reliable quiz web application that aligns with the project’s quality standards.

11.4. Load Testing

For detailed results of our load testing process, please refer to our Load Testing Report

11.5. Monitoring/Profiling

To validate the behavior and performance of the WIChat system under realistic conditions, we used Prometheus as the metrics collector and Grafana for visualization. The following graphs were collected from a local environment using test data. Although the values are not representative of a production system, they serve as a baseline to confirm correct integration and stable behavior of the services.
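
The metrics discussed below, such as http_request_duration_seconds, are typical of what an Express metrics middleware exposes on a /metrics endpoint. As a sketch of how the gateway could expose them, the example below uses the express-prom-bundle package; the library choice and its options are assumptions, since the document only states that Prometheus scrapes the gateway.

[source,javascript]
----
// Sketch: exposing Prometheus metrics from the gateway (library choice is an assumption).
const express = require('express');
const promBundle = require('express-prom-bundle');

const app = express();

// Adds a /metrics endpoint and records http_request_duration_seconds,
// labelled by HTTP method and status code, matching the graphs described below.
app.use(promBundle({ includeMethod: true, includeStatusCode: true }));

app.get('/health', (req, res) => res.json({ status: 'OK' }));

app.listen(8000, () => console.log('Gateway with /metrics on port 8000'));
----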

First Metric

Metric: scrape_duration_seconds

This metric represents the time Prometheus takes to scrape and process metrics from the gatewayservice:8000 via the /metrics endpoint. Values ranged between 0.001 and 0.006 seconds, reflecting very fast and efficient responses. These results are expected in a local environment, where traffic is limited and resource usage is low. It confirms that Prometheus is working correctly and that the Gateway service exposes its metrics reliably. However, values in production may vary due to concurrency, network latency, or server load.

Second Metric

Metric: http_request_duration_seconds_count

This graph shows the cumulative number of HTTP requests made to gatewayservice:8000, grouped by method (GET/POST) and status code. The majority of requests returned status 200, indicating successful interactions with the endpoints. A few 304 responses were also recorded, which are expected when requesting cacheable resources. One 400 response was generated deliberately during testing by submitting invalid login credentials, demonstrating correct error handling by the backend.

Third Metric

Metric: histogram_quantile(0.95, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))

This query calculates the 95th percentile of HTTP request durations, meaning 95% of requests in the last 5 minutes completed faster than the reported value. In this test setup, several peaks were observed, with one reaching up to 0.75 seconds. These peaks coincide with a series of login attempts using various users, which naturally involve more validation and database queries. In contrast, lower and more stable values reflect typical application usage, such as navigating profiles or performing light data fetches.

Fourth Metric

Metric: http_request_duration_seconds_sum (by HTTP method)

This graph represents the total accumulated time (in seconds) taken by gatewayservice to process all HTTP requests during the session, grouped by method. POST requests accumulated over 3.5 seconds, while GET requests barely exceeded 1.2 seconds. This difference is expected since POST requests were directed at more complex operations such as login or data submission, which require more server-side computation. GET requests, used mostly for fetching user profiles or settings, are lighter in processing.

These metrics confirm that the system behaves efficiently under test conditions, with proper request handling and no major performance bottlenecks. The monitoring setup also provides a foundation for future observability in production deployments.

12. Glossary

Term Definition

WIChat

The application providing an interactive Q&A experience.

User

A person interacting with WIChat.

Question

A query generated dynamically using WikiData.

Answer

The correct response to a question in the game.

Hint

AI-generated clues to help users guess the correct answer.

Gemini AI

The LLM service used to generate contextual hints.

WikiData

The external knowledge base used to generate questions.

Gateway Service

The main entry point handling API requests and routing.

User Manager

A component that handles authentication and user data management, and also tracks user scores and the leaderboard.

AuthService

A service responsible for user authentication and security.

UserService

A service managing user-related operations.

LLMService

A service that connects to Gemini AI for hint generation.

QuestionService

A service responsible for retrieving information from WikiData, translating it into generated questions, and saving them in a database for later use in the game.

WebApp

The frontend interface used by players to interact with WIChat.

MongoDB

The database system used to store user and game-related data like questions.

Docker

The containerization platform used for deployment and scalability.

JavaScript

The primary programming language used for both frontend and backend development.

React

The JavaScript library used to build the WIChat frontend.

Express

The web framework for Node.js used to build the WIChat backend.

Node.js

The runtime environment for executing JavaScript code server-side.

Java

Programming language used for load tests.

CI/CD

Continuous Integration and Deployment processes for automation and code deployment.