About arc42
arc42, the template for documentation of software and system architecture.
Template Version 8.2 EN. (based upon AsciiDoc version), January 2023
Created, maintained and © by Dr. Peter Hruschka, Dr. Gernot Starke and contributors. See https://arc42.org.
1. Introduction and Goals
The aim of this project is to create a version of the famous quiz show "Saber y Ganar". In the quiz you have to answer questions about various topics, winning a reward for each correct answer. One of the most relevant requirements is that the questions are generated from Wikidata, so there will always be different questions.
To implement the game we will develop a web application that can be accessed from any device with an internet connection.
Regarding quality requirements, the goal is to achieve an optimal level, especially in terms of usability, maintainability, efficiency, and testability.
The project has several main stakeholders. Firstly, Professor José Emilio Labra, who teaches the subject. Secondly, the students and members of the HappySoftware development team. Lastly, the potential users of the application, whose user experience depends on it.
1.1. Requirements Overview
The functional requirements that the project must meet are the following:
- The application must be accessed through a web frontend.
- A record of users and their game history will be maintained.
- Both questions and answers will be generated using data collected from Wikidata, with only one of the four answer options being correct.
- There will be a countdown to answer each question.
- Two APIs will exist to access information about users and generated questions.
1.2. Quality Goals
Quality goal | Concrete scenario |
---|---|
Usability | The application must be easy to use, so that anyone can use it. |
Availability | The system should be available as much of the time as possible. |
Testability | Functionalities must be covered with tests to ensure correct behavior. |
Performance | Using the system must be as smooth as possible; in particular, question generation must be fast. |
1.3. Stakeholders
Role/Name | Contact | Expectations |
---|---|---|
Happy Software (Dev Team) | Hugo Méndez Fernández, Pablo Barrero Cruz, Alberto Lago Conde, Pablo García-Ovies Pérez, Samuel Bustamante Larriet, María Teresa González García, Daniel Andina Pailos | Students are the developers of the app. They need to deliver a great project to obtain a good mark. |
Happy Software (Investors) | Owner and investors. | They expect the application to work correctly and produce benefits for the company. |
Teachers | José Emilio Labra | They will grade the project. |
Users | Users of the game | They want to have fun answering questions. The app must be intuitive and easy to use. |
RTVE | Radio y Televisión Española | They expect a competent application, since they are the ones who invested in the project and want to promote it on their different platforms. |
WikiData | Wikimedia Foundation | They hope, thanks to the impact of our application, to gain more relevance and visibility in order to attract more users and expand their ways of working. Their structured data and the semantic web also gain relevance. |
2. Architecture Constraints
There are various architectural constraints that affect this application. They have been divided into the following sections.
2.1. Naming Conventions
Constraint | Description |
---|---|
Application name | The name of the developed application will be WIQ. The team has discussed the meaning of this acronym. |
2.2. Application Requirements
Constraint | Description |
---|---|
Theme | Online question-and-answer application, similar to the "Saber y Ganar" game show. |
Question generation | Both questions and answers will be automatically generated from Wikidata. |
Question structure | Each question will have one correct answer and several incorrect or distracting answers. There will be a time limit to answer each question. |
Frontend | The system will have at least one deployed Web frontend. Access will be through the Web. |
User management | Users can register and log in to play. Registered users can also check their participation history in the system (number of games, correct/incorrect answers, times, etc.). |
API usage | APIs will be used to access information about users and generated questions. |
Docker | Docker will be used to deploy the application locally and remotely. |
2.3. Documentation
Constraint | Description |
---|---|
Use of Arc42 | The project will follow the Arc42 documentation standard. |
2.4. Organizational and Versioning Constraints
Constraint | Description |
---|---|
Project organization | The project is divided into three established deliveries, so each module of the project will evolve over several versions marked by those deliveries. After the final delivery, a final presentation will take place in which the team will explain the application. |
Git and GitHub | The use of Git as the version control system and of the GitHub platform is mandatory. The public repository will be hosted on this platform. |
2.5. Development Team Constraints
Constraint | Description |
---|---|
Technical and theoretical knowledge | We are not professional developers and have limited experience, so we will use tools and languages that are at least minimally known by some team members. |
Budget | We will use free tools or services for which the University has a license. |
3. System Scope and Context
3.1. Business Context
- WIQ: Overview of the whole system. Essentially, a web application in which users will be able to register/log in, play "Saber y Ganar" and display statistics of their games.
- Wikidata: Free and open knowledge base that acts as a central storage repository for structured data. Its API will be used to obtain the information used in the questions and answers of the application.
3.2. Technical Context
3.2.1. System Scope
Other elements of the system, which can be looked up in Point 5: Building Block View, are:
- WIQ Webapp: Module that supports user interaction via the UI, i.e., the front-end of the whole system.
- Question Generation Service: Service used internally to manage information retrieval from Wikidata.
- Gateway Service: Express service that is exposed to the public and acts as a proxy to user management, allowing sign up and log in (see the sketch after this list).
- User service: Express service that handles the insertion of new users in the system.
- Auth service: Express service that handles the authentication of users.
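To illustrate the proxy role of the Gateway Service, here is a minimal sketch that forwards sign-up and login requests to the user and auth services using Express and axios. The route names, ports and environment variables are assumptions for illustration only, not the project's actual code.

```javascript
// Minimal sketch of the gateway-as-proxy idea; routes, ports and URLs are
// illustrative assumptions, not the project's actual configuration.
const express = require('express');
const axios = require('axios');

const app = express();
app.use(express.json());

const authServiceUrl = process.env.AUTH_SERVICE_URL || 'http://localhost:8002';
const userServiceUrl = process.env.USER_SERVICE_URL || 'http://localhost:8001';

// Forward login requests to the auth service
app.post('/login', async (req, res) => {
  try {
    const authResponse = await axios.post(`${authServiceUrl}/login`, req.body);
    res.json(authResponse.data);
  } catch (error) {
    res.status(error.response?.status || 500).json({ error: 'Authentication failed' });
  }
});

// Forward sign-up requests to the user service
app.post('/adduser', async (req, res) => {
  try {
    const userResponse = await axios.post(`${userServiceUrl}/adduser`, req.body);
    res.json(userResponse.data);
  } catch (error) {
    res.status(error.response?.status || 500).json({ error: 'User creation failed' });
  }
});

app.listen(8000, () => console.log('Gateway listening on port 8000'));
```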
4. Solution Strategy
4.1. Technology decisions
To develop the app we will use the following technologies:
- JavaScript will be the main programming language
- ReactJS to build the user interface
- Docker Compose to deploy all the microservices
- GitHub for version control
- Wikidata API to obtain question and answer information
- ExpressJS to build the backend
We have considered the trade-offs of each technology and evaluated alternatives such as Spring Boot or PHP for the backend of the app. However, JavaScript was the language that best fit our requirements due to its simplicity and its focus on agile development, which can lead to faster development cycles. One of the main disadvantages is that we had to learn it, because our main language is Java.
4.2. Implementation design
4.2.1. Question generation strategy
For the question generation process, we consult Wikidata using one of the question generation structures available in a JSON file, enriching it with properties that are more likely to be successfully queried from Wikidata.
The JSON structure allows us to select the desired categories for the requested questions if necessary and, in the future, the language of the question. From there, we construct the query to Wikidata and convert the result into a question object. Here is an example of a category element in the JSON:
[
{
"name": "country",
"instance": "Q6256",
"properties": [
{
"property": "P36",
"template": {
"es": "Cuál es la capital de x",
"en": "What is the capital of x",
"fr": "Quelle est la capitale de x"
},
"category": ["Geography", "Cities"]
},
{
"property": "P38",
"template": {
"es": "Que moneda tiene x",
"en": "What currency x has",
"fr": "Quelle est la devise de x"
},
"category": ["Political"]
}, ...
], ...
}, ...
]
This structure allows us to select multiple and varied questions from a given category or for a given item, varying the properties depending on the type of element we are querying within a category (i.e., a Wikidata item).
This process allows us to dynamically generate questions based on specific categories and properties, ensuring a diverse set of questions for users across different topics and languages.
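As an illustration of how such a template entry could be turned into a query, the sketch below builds a SPARQL query for the "country"/P36 (capital) example and fetches candidate subject/answer pairs from the public Wikidata SPARQL endpoint. The function names are our own assumptions for illustration and do not reflect the project's actual code.

```javascript
// Hedged sketch: turn an (instance, property) template entry into a Wikidata query.
// Function names are illustrative assumptions.
const fetch = require('node-fetch');

function buildQuery(instanceId, propertyId, language = 'en', limit = 50) {
  // e.g. instanceId = 'Q6256' (country), propertyId = 'P36' (capital)
  return `
    SELECT ?item ?itemLabel ?value ?valueLabel WHERE {
      ?item wdt:P31 wd:${instanceId} ;
            wdt:${propertyId} ?value .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "${language}". }
    }
    LIMIT ${limit}`;
}

async function fetchQuestionData(instanceId, propertyId) {
  const url = 'https://query.wikidata.org/sparql?format=json&query='
    + encodeURIComponent(buildQuery(instanceId, propertyId));
  const response = await fetch(url, { headers: { Accept: 'application/sparql-results+json' } });
  const json = await response.json();
  // Each binding is one potential (subject, correct answer) pair,
  // e.g. ("Spain", "Madrid") for the "What is the capital of x" template.
  return json.results.bindings.map(b => ({
    subject: b.itemLabel.value,
    answer: b.valueLabel.value,
  }));
}

// Usage: fetchQuestionData('Q6256', 'P36').then(pairs => console.log(pairs.slice(0, 3)));
```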
4.2.2. Question service functionality
For the question service implementation, as explained in ADR 08 - Questions Database Functioning, we’ve devised a strategy to ensure a seamless experience with minimal question repetition.
The question request route removes served questions from the database, preventing repetition. We've established two thresholds for the stored question count:
- A high threshold to maintain a minimum number of stored questions.
- A low threshold to avoid depleting the collection entirely.
In exceptional cases where no questions are available, a pair of questions will be generated synchronously before the collection is asynchronously replenished up to the high threshold; this asynchronous replenishment is also the usual behavior whenever the stored count crosses the thresholds.
This approach ensures continuous question availability while mitigating repetition risks. Additionally, we aim to explore the feasibility of generating questions during periods of service inactivity or low request volume for further optimization.
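A minimal sketch of this two-threshold idea is shown below. The in-memory store, the placeholder generator and the threshold values are stand-ins we assume for illustration; the real service backed by MongoDB and Wikidata may differ.

```javascript
// Hedged sketch of the two-threshold replenishment strategy described above.
// The in-memory store and the generator are stand-ins for the real MongoDB
// collection and the Wikidata-backed generator; names and values are assumptions.
const HIGH_THRESHOLD = 100; // target number of stored questions
const LOW_THRESHOLD = 20;   // below this, start replenishing in the background

const store = []; // stand-in for the questions collection

async function generateQuestions(n) {
  // Placeholder for the Wikidata-based generation step
  return Array.from({ length: n }, (_, i) => ({ text: `generated question #${i}` }));
}

async function getQuestion() {
  // Exceptional case: nothing left, generate a couple of questions synchronously
  if (store.length === 0) {
    store.push(...await generateQuestions(2));
  }

  // Serve one question and remove it, so it is not repeated soon
  const question = store.pop();

  // If we are running low, refill up to the high threshold asynchronously
  if (store.length < LOW_THRESHOLD) {
    generateQuestions(HIGH_THRESHOLD - store.length)
      .then(batch => store.push(...batch))
      .catch(err => console.error('Background replenishment failed', err));
  }

  return question;
}

// Usage: getQuestion().then(q => console.log(q.text));
```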
4.3. Decisions about the top-level decomposition of the system
We decided to use a microservices architecture, with different modules for each functionality. For example, we will use a microservice to generate the questions.
4.4. Decisions on how to achieve key quality goals
Quality goals are explained in detail in point 10.
Quality goal | Decisions to achieve it |
---|---|
Usability | We are going to have real users test the app interface and improve it according to their feedback. |
Availability | Docker Compose will help avoid problems with the deployment of the app. In addition, we will use web hosting to expose it to the internet. |
Testability | We created unit and e2e (integration) test suites to test the application. |
Performance | We will use the minimum required number of API calls to keep response times low, for example by using bulk requests. |
4.5. Relevant organizational decisions
Our way of working is based on weekly work, with meetings held when necessary: one meeting will always be held during lab time in order to assign tasks and make minor decisions, while further meetings will be intended for more thorough reviews as well as more significant decisions.
Each assigned task will be created as an Issue in GitHub to track progress. In addition, we are going to use GitHub Projects to organize the team's workflow. To merge code into the develop branch we will use Pull Requests, so that changes are approved by every member of the team.
5. Building Block View
The building block view presents, in a graphical manner, a decomposition of the most important parts of the system.
5.1. Whitebox Overall System
Main view of the system. The WIQ application is related to one external component: the Wikidata API.
- Motivation: This is a general overview of the application.
- Contained Building Blocks:
  - Wikidata Infinite Quest: The main application, represented as a black box that will be detailed in the following decompositions.
  - Wikidata API: The external API that the system uses to generate questions and answers.
5.2. Level 1
5.2.1. White Box Wikidata Infinite Quest
- Motivation: First decomposition of the system.
- Contained Building Blocks:
  - webapp: The main module of the application.
  - gateway: Handles the communication between the user service and question service modules and the web app service. It is the REST API.
  - questions: Gets questions from Wikidata and handles their loading into the database.
  - users: Handles user management.
  - multiplayer: Handles multiplayer management.
  - MongoDB: MongoDB database.
  - MariaDB: MariaDB database.
- Other Important Interfaces:
  - Docs: Contains the application documentation.
5.3. Level 2
5.3.1. White Box users
- Motivation: Decomposition of the users black box from the level 1 system.
- Contained Building Blocks:
  - Routes: Contains route handlers for the users.
  - Services: Contains the data logic.
- Other Important Interfaces:
  - index: Defines the entry point of the User Service.
5.3.2. White Box questions
- Motivation: Decomposition of the questions black box from the level 1 system.
- Contained Building Blocks:
  - Routes: Contains route handlers for the questions.
  - Services: Contains the data logic.
- Other Important Interfaces:
  - index: Defines the entry point of the questions service.
  - utils: Defines auxiliary functions and the questions structure.
5.3.3. White Box Web App
- Motivation: Decomposition of the webapp black box from the level 1 system.
- Contained Building Blocks:
  - public: Contains image and audio files.
  - src: Contains the components, pages and data of the front-end application.
5.3.4. White Box gateway
- Motivation: Decomposition of the gateway black box from the level 1 system.
- Contained Building Blocks:
  - gateway-service: Defines the routes that handle the communication between the user service and question service modules and the web app service.
  - prometheus: Contains the configuration of Grafana and Prometheus.
- Other Important Interfaces:
  - monitoring: Uses Grafana and Prometheus to monitor the application.
5.3.5. White Box multiplayer
- Motivation: Decomposition of the multiplayer black box from the level 1 system.
- Contained Building Blocks:
  - index: Handles the multiplayer management.
5.4. Level 3
5.4.1. White Box routes from users
- Motivation: Decomposition of the routes black box from the users white box at level 2.
- Contained Building Blocks:
  - user-routes: Contains route handlers for registration, ranking, groups management, statistics management and questions record management.
  - auth-routes: Contains route handlers for the login.
5.4.2. White Box services from users
- Motivation: Decomposition of the services black box from the users white box at level 2.
- Contained Building Blocks:
  - user-model: Defines the User, Statistics and Group database schemas.
5.4.3. White Box routes from questions
- Motivation: Decomposition of the routes black box from the questions white box at level 2.
- Contained Building Blocks:
  - question-routes: Contains route handlers for questions management.
5.4.4. White Box services from questions
- Motivation: Decomposition of the services black box from the questions white box at level 2.
- Contained Building Blocks:
  - question-data-model: Defines the Question database schema.
  - question-data-service: Responsible for managing questions in the database.
  - wikidata-service: Responsible for getting questions from Wikidata.
5.4.5. White Box src from webapp
- Motivation: Decomposition of the src black box from the webapp white box at level 2.
- Contained Building Blocks:
  - components: Defines common elements of the pages, such as the nav-bar, footer, etc.
  - pages: Defines the different screens of the application.
  - data: Contains the data used by the pages.
  - App: Main entry point for the application logic. Defines the application's theme and navbar routes.
  - index: Initializes the application and renders the main component (App.js) to the DOM.
6. Runtime View
In this Runtime View section, some sequence diagrams of different interactions with the system will be shown.
6.1. Register
6.2. Login
6.3. See User Statistics
6.4. See Games Instructions
6.5. See Users and Groups Ranking
6.6. Groups
6.6.1. Group List and Creation
6.6.2. Group Joining
6.6.3. Group Exiting/Deletion
6.6.4. Group Details
6.7. Play Games
6.8. See and edit your profile
7. Deployment View
We have several services deployed on a single virtual machine using containers and Docker Compose, which eases the deployment. These are the different containers and their relations:
- WebApp: The web page. Gets data from the Gateway Service.
- Gateway Service: Data access interface for the services.
- Users: Manages authentication and statistics about users.
- MariaDB: Persistence system used for the users data.
- MongoDB: Persistence system used for the questions data.
- Grafana and Prometheus: Monitoring systems.
- Questions: Generates the questions used in the game.
- Multiplayer: Allows users to play multiplayer games.
We are going to use an Azure VM to deploy all these services; a minimal Docker Compose sketch is shown below.
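As an illustration only, a docker-compose.yml for this deployment could look roughly like the sketch below. Build paths, images, ports and environment variables are assumptions, not the project's actual configuration.

```yaml
# Hedged sketch of a possible docker-compose.yml; service names follow the
# containers listed above, but paths, ports and variables are assumptions.
version: "3"
services:
  mongodb:
    image: mongo
    volumes:
      - mongodb_data:/data/db
  mariadb:
    image: mariadb
    environment:
      MARIADB_ROOT_PASSWORD: example
  users:
    build: ./users
    depends_on:
      - mariadb
  questions:
    build: ./questions
    depends_on:
      - mongodb
  multiplayer:
    build: ./multiplayer
  gatewayservice:
    build: ./gatewayservice
    ports:
      - "8000:8000"
    depends_on:
      - users
      - questions
  webapp:
    build: ./webapp
    ports:
      - "3000:3000"
    depends_on:
      - gatewayservice
  # (Prometheus and Grafana services omitted for brevity)
volumes:
  mongodb_data:
```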
8. Cross-cutting Concepts
Some important concepts need to be taken into account for a better understanding of the application. These concepts fall into the following categories:
- Domain concepts
- User Experience (UX)
- Operation Concepts
- Architecture and Design Patterns
- Development Concepts
Next, each category will be detailed.
8.1. Domain concepts
At the moment, the application follows this schema:
- User: The person that uses the application. Multiple Users can use the app at the same time.
- Contest: The contest is the part that the User can see. It contains everything the user can do, such as playing games, being part of groups or looking up rankings and statistics.
- Game: The User can play different games, including The Challenge, Wise Men Stack or even a multiplayer mode, which enables several Users to play together. Games consist of several questions that users have to answer. Below there is a schema that shows the different game modes that are available.
- Question: Each question has different answers, but only one of them is correct. Answering questions correctly rewards users with points.
- Statistics: Each user has statistics that show different aspects of their profile, such as the time they invested in each game mode, correct and incorrect questions, etc.
- Profile: The user's profile has data such as their username and the number of points they have earned. They can also choose a profile picture from among some given avatars.
- Group: Users are able to join or create groups; that way they can appear in the groups ranking.
- Wise Men Stack: The player chooses a topic from the available options and must answer a battery of questions related to it within 60 seconds. For each question, the host provides two options. If the contestant guesses correctly, they move on to the next question.
- Warm Question: It consists of several topics with varied themes. For each correct answer, €100 is earned, and €10 are lost if the contestant passes, does not respond, or answers incorrectly.
- Discovering Cities: The contestant faces a challenge where they are repeatedly asked questions referring to different cities around the world. To successfully overcome the challenge, the contestant must answer as many questions as possible correctly throughout the test. Time and number of questions are fixed.
- The Challenge: It is the quintessential game mode, as it allows you to customize the match to your liking. This game mode is tailored for those who wish to practice certain game formats before engaging in our various other game modes. The number of questions, time per question and category can be set.
- Multiplayer: Create a room and share the room code with other players to play together. It also has a room chat.
8.2. User Experience (UX)
- Frontend: The frontend of this application is a deployed web app. The user can register or log in with an already created account on an intuitive page. They can also play different game modes and consult their game history, statistics, and even some rankings. As can be seen below, the homepage is bright and appealing, which leads to a better user experience. Users can easily choose the game mode they prefer and play.
- Internationalization: The application is available in several languages, with English as the main language. This provides a better user experience, as users can tailor the application to their personal preferences.
8.3. Operation Concepts
- Usability: We tried to make the application easy to use. For this reason, we had some people try our application, so we could learn its strengths and weaknesses and improve them. Usability also affects the User Experience, so it is an important aspect of the application. Up to this moment, usability testing has helped with the color palette chosen for the application.
We have also taken into account certain aspects that could make it difficult for a person to use our application properly. For instance, we have established ticks and crosses as well as colours to indicate whether answers are correct or not. This way, a colour-blind person can easily know whether they got the answer right.
8.4. Security
We have implemented some security measures in the application. We have blocked access to certain routes if you are not logged in; this way, we prevent external people from accessing our application, as that could lead to other security issues. We have also established that passwords need to meet a certain security level: they need to be at least 8 characters long and must contain upper- and lower-case letters, numbers and special characters. In addition, passwords are stored encrypted, so if the database were stolen, the data would still be secure.
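As an illustration of this password policy and hashed storage, the sketch below validates a password against the stated rules and hashes it with bcrypt before it would be stored. The regular expression and helper names are our own assumptions; the actual implementation may differ.

```javascript
// Hedged sketch: password policy check and hashed storage.
// The regular expression and helper names are illustrative assumptions.
const bcrypt = require('bcrypt');

// At least 8 characters, with lower case, upper case, a digit and a special character
const PASSWORD_POLICY = /^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^A-Za-z0-9]).{8,}$/;

function isPasswordValid(password) {
  return PASSWORD_POLICY.test(password);
}

async function hashPassword(password) {
  if (!isPasswordValid(password)) {
    throw new Error('Password does not meet the security policy');
  }
  // Only the hash is stored, never the plain-text password
  return bcrypt.hash(password, 10);
}

async function checkPassword(password, storedHash) {
  return bcrypt.compare(password, storedHash);
}
```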
8.5. Architecture and Design Patterns
- Microservices: The application is composed of several microservices, such as User Management, which covers signing up, logging in and everything related to the users' points and timing. Microservices provide an easy way of building a complex application out of independent systems. Another important microservice is the question generation system: it creates endless questions related to various topics, so, thanks to this, users never get bored of the game because questions do not repeat themselves. The webapp microservice includes everything related to the graphical interface; users communicate with the application through this service.
All of the architectural decisions that have been taken during the creation of the application are specified in the repository Wiki section.
8.6. Development Concepts
- Testing: Numerous use cases are studied so as to provide a solid and easy-to-use application. There are unit tests covering every functionality of the project, as well as e2e tests covering the main game.
- CI/CD: The application uses continuous integration and deployment. Team members commit frequently to the repository where the project is stored, which makes it easier to assemble project parts involving collaboration from different team members.
9. Architecture Decisions
The architectural decisions are fully documented in the Wiki section of our repository. Therefore, to avoid redundancy, instead of re-documenting those decisions here, we refer to them.
9.1. Team Working Methodology
10. Quality Requirements
The main quality goals are:
- Usability: The interface should be intuitive, with clear instructions and an accessible design. This allows users of all abilities to navigate and use the application effortlessly. It must also remain usable on mobile devices.
- Availability: The system should aim for maximum availability, ensuring it is accessible around the clock. This guarantees uninterrupted access most of the time, regardless of the user's time zone or schedule.
- Testability: The code must be tested and should be easy to test (for instance, with data-testid attributes to reference elements). The tests ease the implementation of new features because they make sure old functionality never stops working.
- Performance: Efficiency is a priority in system usage, particularly in quick question generation, ensuring a good experience for users.
10.1. Quality Tree
10.2. Quality Scenarios
Usage Scenario table:
Usage Scenario | System Reaction |
---|---|
The user opens the web application, enters their username and password, and clicks the login button. | The system verifies the information entered by the user and, if correct, redirects them to the main page; otherwise, it indicates that an error has occurred. |
The user surveys the main window, where several buttons with different options appear. | In response to pressing each of these buttons, the system displays the corresponding content. |
The user starts the game and awaits the questions. | The system swiftly generates the question and its possible answers. |
The user loses the game and decides to stop playing for a while. Five hours later, they decide to play again. | The system remains active and functions correctly. |
Change Scenario table:
Change Scenario | System Reaction |
---|---|
Adding an additional login option so the account can be accessed not only through the username but also through the email. | The system should be capable of providing this functionality without affecting the existing ones. The tests verify that the old login is not affected. |
Adding a new game mode or functionality. | When adding a new feature, the application's usage flow should not be distorted, ensuring it can still be used in the same manner. |
Adding a new game language. | When adding a new game language, the system should continue to function smoothly. |
11. Risks and Technical Debts
11.1. Risks
To assess the relevance level of the following risks, we will use number 1 to indicate low relevance, 2 for medium relevance, and 3 for high relevance.
Risk | Relevance | Considerations |
---|---|---|
Limited knowledge of certain tools or languages | 2 | A solution could be to use the tools and languages that are best known to the team members. In addition, each member should try to learn the aspects they know less about. |
The team has not worked together before | 1 | A suggestion is to maintain good communication and report anything that could affect others. |
Being a big group | 1 | Having many members can make communication difficult. However, if the previous suggestions are followed, there should not be any problem. |
11.2. Technical debts
Technical Debt | Considerations |
---|---|
Low-quality code | The use of new technologies and languages can lead to poorly written or poorly designed code. To address this issue, we use pull requests to ensure that the code is reviewed by multiple team members. |
Deployment issues | Not having worked with Docker and other deployment tools before may cause problems when deploying the application. For this reason, we are putting our best effort into learning these new technologies. |
Dependency on Wikidata | It is a requirement, so we need to depend on it. However, we have created a questions database, so if Wikidata were not working, the application would continue to work for some time. |
Filtering questions and answers | Given the structure of Wikidata, there are cases where questions and answers do not have the proper label we are looking for. Although we have established filters and different strategies, they sometimes fail. |
Changes in the database model | Changing the model of a relational database makes it necessary to create a new database, losing all of the data we had before. |
Game code duplication | Due to our limited knowledge of JavaScript, we have not found a way to share code between the different game modes. Another approach should be found, as the current one is not maintainable. |
Non-expiring session token | Once someone has logged in to the application, the session token does not expire, so the session does not end unless the user explicitly logs out. |
12. Glossary
In this section we present, define and translate some concepts that we consider relevant when approaching our application.
12.1. Acronyms
Acronym | Term | Definition |
---|---|---|
ADR | Architectural Decision Record | Document that describes a choice the team makes about a significant aspect of the software architecture they're planning to build. Each ADR describes the architectural decision, its context, and its consequences, and its goal is to ensure that the proposed design meets functional, aesthetic, regulatory, and safety requirements before proceeding further with the project. |
API | Application Programming Interface | Set of rules, protocols, and tools that allows different software applications to communicate and interact with each other. It defines the methods and data formats that developers can use to request and exchange information between different software components. |
CI/CD | Continuous Integration & Continuous Delivery | CI refers to the practice of automatically and frequently integrating code changes into a shared source code repository; CD is a two-part process that refers to the integration, testing, and delivery of code changes. Continuous delivery stops short of automatic production deployment, while continuous deployment automatically releases the updates into the production environment. |
WIQ | Wikidata Infinite Quest | The web application's name, where users can register and log in to play different types of rounds. |
12.2. Domain Specific Terms
Term | Definition | ES Translation |
---|---|---|
Discovering cities | Test in which contestants receive clues about a specific city and must guess which city it is; these clues may include descriptions of geographical features, famous monuments, historical or cultural events, among other aspects related to the city in question. | Descubriendo ciudades |
HappySw | Name of the fictitious company under which the members of the group simulate having been hired to develop the application. | N/A |
Know & Win | Popular Spanish television program that combines quiz shows with educational entertainment, aired daily on La 2 of Televisión Española. The show is known for its unique format, which includes a variety of challenges and tests where contestants demonstrate their knowledge in different areas such as history, geography, science, popular culture, literature and art, among other subjects. | "Saber y Ganar" |
Player | User who can register and then log in to the app to play some of the different quizzes described throughout this document. | Jugador |
The Challenge | Test where contestants must face a series of questions or activities that test their knowledge and skills in a specific area, such as general culture, history, science or art, among other topics. This test may consist of answering multiple-choice questions, completing sentences, identifying images, or performing activities related to the subject matter. | El desafío |
Warm question | Test in which the contestants' aim is to answer questions rapidly and accurately, requiring quick thinking, as questions are presented rapidly without pauses between them. Contestants strive to provide correct answers to accumulate points, but they must also carefully assess the risk of answering incorrectly, which could lead to losing points. | Pregunta caliente |
Wise Men Stack | Test in which questions are presented on a wide range of topics spanning from literature and history to science and popular culture. Contestants must answer as many questions correctly as possible within a limited time frame. | Batería de sabios |
12.3. Technical Terms
Term | Definition | ES Translation |
---|---|---|
Arc42 | Set of recommendations for documenting and designing software architectures, particularly for software-intensive systems. It provides a template for architecture documentation structured into various sections covering different aspects of the architecture, aiming to promote clear communication and understanding of the architecture among stakeholders. | N/A |
Backend | Server-side of a software application or website. It encompasses everything that users don't see directly, such as databases, servers, and application logic. The backend is responsible for processing requests from the frontend and generating the appropriate responses. | N/A |
Container | Lightweight, portable, and self-contained unit that packages together all the necessary software components, such as code, runtime, libraries, and dependencies, needed to run an application. Containers provide a consistent environment for running applications across different computing environments. | Contenedor |
Frontend | Part of a software application or website that users interact with directly. It encompasses the user interface (UI) and user experience (UX) components that users see and interact with in their web browsers or on their devices. This includes elements such as buttons, forms, menus, and any visual or interactive elements users interact with to use the application. | N/A |
Git | Free and open-source version control system used for tracking changes in source code during software development. It allows multiple developers to collaborate on projects simultaneously and efficiently manage changes to the codebase. | N/A |
Wikidata | Free and open knowledge base that acts as a central storage repository for structured data from Wikimedia projects and beyond. It provides a common platform for collecting and sharing structured data about various topics, including but not limited to people, places, events, concepts, and objects. | N/A |
13. Appendix I: Load tests
We conducted load tests on our application using Gatling. This type of testing lets us know how robust our application is in relation to the number of users that interact with it at the same time. Initially, we recorded the specific functionalities we intended to test, and then we configured the tests accordingly. Our primary focus was on testing the game component, as it constitutes the core aspect of our application. After recording the functionality to be tested, we increased the number of requests to 1,000 and established that these requests be made gradually, simulating a real-life scenario.
After setting this up, we executed the load tests, obtaining results that were not too bad. However, as shown in the next picture, more than 25% of the requests failed. This means that there is a possibility that the game fails while playing, which is not acceptable in an application of this kind. Even though most of the requests received a response, we will try to reduce the number of failed requests. In addition, instant response to requests is not the highest priority for us; it is more important that as many requests as possible are answered correctly.
13.1. Test 1: 1000 users with poor question generation algorithm
After this load test, we tried to improve the question generation so as to avoid the failures mentioned above. We tested our application again, obtaining new results. We believe it is important to mention that even if the settings were the same in both tests, the application had more game modes and new functionalities by then, which may affect the number of requests and the time needed for each one.
13.2. Test 2: 1000 users with new question generation algorithm
As it can be seen in the picture above, the results have changed noticeably. From our point of view, there are two main aspects which seem remarkable. On one hand, the drastic decrease in the number of failed requests. The failed requests have decreased to 2% compared to 27% in the first test. This demonstrates that the changes made to the application have achieved their objective. On the other hand, the overall increase in the time it takes to respond to a request catches the eye as well. Nevertheless, given that the difference in time is milliseconds and it is not a real-time critical application, we consider that the objective of these tests has been fulfilled.
13.3. Users distribution along the simulation and response time distribution
We think it is useful to compare some of the statistics that these tests have provided us with. For instance, the Users distribution along the simulation graph allows us to see whether the established settings lead to a gradual user interaction with the application. Also, the Response time distribution graph is a very visual way of seeing the average time requests take to be answered.
13.3.1. Test 1
13.3.2. Test 2
13.4. Number of responses per second
Finally, we would like to compare another graph because we believe it illustrates the number of responses per second in a straightforward manner during the tests. This allows us to observe the percentage of failed requests each second. As mentioned earlier, our primary goal after the initial test was to reduce the number of failed requests, even if it meant slightly increasing the average response time for each request.
13.4.1. Test 1
13.4.2. Test 2
As we can see, the second test shows a much more balanced graph. Responses are distributed better over time and failures are a minimal percentage of the total responses.
For these reasons, the load tests have motivated us to develop a more stable question generation algorithm. This reduces the likelihood of requests failing when users are interacting with our application. This ultimately leads to a better user experience, which is a crucial aspect of application development.
14. Appendix II: Other tests
14.1. Unitary tests
We wrote unit tests throughout the whole application. These tests were useful for knowing whether what we had just implemented worked properly or not. They are also a very easy way of checking whether you changed something without noticing: whenever anything was modified, the unit tests were adapted to the new functionality. However, changes made in the application may affect parts that we were not expecting to be affected. This way we can guarantee that the application continues to work properly and check whether some parts depend on others when they should not. For these reasons, it is very important to have as much of our code covered as possible. We have achieved a coverage percentage greater than 80%.
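As an illustration of the kind of unit test used, the sketch below exercises a hypothetical Express endpoint with Jest and supertest. The route, payload and expected responses are assumptions for illustration, not the project's actual tests.

```javascript
// Hedged sketch of a unit test for a hypothetical Express endpoint,
// using Jest and supertest. Route and payload are illustrative assumptions.
const request = require('supertest');
const express = require('express');

// Minimal app standing in for the real user service
const app = express();
app.use(express.json());
app.post('/adduser', (req, res) => {
  if (!req.body.username || !req.body.password) {
    return res.status(400).json({ error: 'Missing fields' });
  }
  res.json({ username: req.body.username });
});

describe('POST /adduser', () => {
  it('creates a user when username and password are provided', async () => {
    const res = await request(app)
      .post('/adduser')
      .send({ username: 'testuser', password: 'Str0ng!Pass' });
    expect(res.statusCode).toBe(200);
    expect(res.body.username).toBe('testuser');
  });

  it('rejects requests with missing fields', async () => {
    const res = await request(app).post('/adduser').send({});
    expect(res.statusCode).toBe(400);
  });
});
```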
14.1.1. SonarCloud
The picture below shows an overview of our repository in SonarCloud. As can be seen, all of the different services reach 80% or more coverage, some of them reaching almost 90%. The total coverage of our project is around 82%. However, it is important to remember that coverage is not only about numbers but about testing the project in a meaningful way.
We would also like to mention that SonarCloud offers a graph where risks in different parts of the code are displayed. Bubbles in the top-right side of the graph mean that the longer-term health may be at risk; green bubbles at the bottom-left are best. Below is the graph of our project, which shows all bubbles in green, with most of them in the bottom-left part.
14.2. Acceptance tests
Acceptance tests are also important. They do not focus on the functionality of the application but on the user experience. This way we were able to easily know whether elements were rendered quickly. In addition, these tests let us measure how long an interaction with the application takes. For these tests we used a MariaDB database created only for this purpose. We used MariaDB because we needed to get information about users, which is stored in this type of relational database.
We focused on testing the different games available, as this is the core part of the application. It is important to mention that e2e tests can be executed in two ways. The first way uses a graphical interface, which is easier for the developer: we are able to see how the tests are executing, so we may spot issues that otherwise we would not be able to detect. The second way is without a graphical interface; it is used when the e2e tests are executed through GitHub Actions during deployment, not locally. It is sometimes more difficult to detect the issues, but there is still a good overview of the test execution which helps to detect problems.
14.3. Usability tests
One of our quality goals is Usability, and we always keep it in mind when developing new features. However, we need to check whether we are doing it well. To test the usability of the application we carried out some rudimentary usability tests.
We had three users test the application at different stages of development, helping us to change things that we did not notice while developing it.
For instance, one user told us that the 'Play' button on the home page always redirected to the login page, even if the user was already logged in. This caused confusion, so we changed it to redirect to the game selection page when logged in.
Another usability test was made with the Android application. The user noticed that sometimes the nav bar behaved strangely and messed up the entire interface. We addressed these comments by adapting the navbar.
15. Appendix III: Application monitoring
Monitoring is a crucial part of an application. It is an easy way of knowing how well a web application is working through different graphs and metrics. For this, we adapted the monitoring system that was given to us and personalized it. That means we are using Prometheus as well as Grafana to monitor our project. Prometheus intercepts every request that reaches our application's gateway; Grafana takes those data and paints them in easy-to-understand graphs. We have a dashboard in Grafana to display some aspects that we consider relevant.
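As an illustration of how the gateway could expose these metrics to Prometheus, the sketch below uses the express-prom-bundle middleware. The option values and port are assumptions; the actual configuration of the project may differ.

```javascript
// Hedged sketch: exposing request metrics from the Express gateway to Prometheus
// using express-prom-bundle. Option values are illustrative assumptions.
const express = require('express');
const promBundle = require('express-prom-bundle');

const app = express();

// Collects request counts and durations and exposes them at /metrics,
// which Prometheus scrapes and Grafana then visualizes.
const metricsMiddleware = promBundle({
  includeMethod: true,
  includePath: true,
});
app.use(metricsMiddleware);

app.get('/health', (req, res) => res.json({ status: 'OK' }));

app.listen(8000, () => console.log('Gateway with /metrics listening on port 8000'));
```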
The dashboard that we use to monitor our project is called Wiq_es04 Dashboard and it has 3 different panels. The first of them shows the number of requests over time. The second one shows requests that accessed pages that were not found. The third one plots the average time each request takes.
We would like to mention that this monitoring is set up to be available both in production and in development environments. For this reason, our Grafana dashboard is accessible.
To see how Grafana works we have used Apache. We have established the number of requests and how fast we want them to be executed, which lets us easily check how our project handles requests. Another interesting thing to mention are the metrics, which show every different request on the application and its status code.