1. Introduction and Goals
The STAP (Smarter Than A Penguin) web application is developed for RTVE to create an experimental version of the 'Saber y Ganar' quiz show. The primary goal of STAP is to provide users with an engaging platform where they can participate in quiz games, answer questions generated from Wikidata, and win prizes.
This document outlines the essential requirements guiding the software architects and development team in creating STAP.
1.1. Requirements Overview
The system aims to fulfill the following essential requirements:
- Users can register and log in to participate in quiz games.
- Questions are automatically generated from data available in Wikidata.
- Users receive historical data of their participation, including the number of games played, questions passed and failed, and timestamps.
- Each question must be answered within a specific time limit.
- Questions consist of one correct answer and several distractors, all automatically generated.
- Access to user information and generated questions is available through an API.
1.2. Quality Goals
Quality Goal | Description |
---|---|
Reliability | Ensure consistent and accurate question generation and user data management. |
Performance | Optimize system response times and capacity to handle multiple user interactions simultaneously. |
Security | Implement robust security measures to protect user data and prevent unauthorized access. |
Usability | Provide an intuitive and user-friendly interface to enhance user experience. |
Portability | Enable seamless deployment and operation across different environments and platforms. |
Testability | Facilitate comprehensive testing to ensure software correctness and identify potential issues early. |
Availability | Ensure system uptime and accessibility to meet user demands and minimize downtime. |
1.3. Stakeholders
Role/Name | Contact | Expectations |
---|---|---|
Users | N/A | Intuitive and enjoyable quiz experience |
Professors | Pablo González (gonzalezgpablo@uniovi.es) | A well-designed web application that fulfills the requirements |
RTVE | N/A | Reliable and engaging platform for users |
Development team | Sergio Truébano Robles (uo289930@uniovi.es) | Clear documentation and a reliable, performant, and available system |
2. Architecture Constraints
When designing the STAP application, several constraints must be taken into account, as they significantly shape the overall design of the application and the architectural decisions. These constraints must be respected to ensure that the final product meets the needs and expectations of the users and stakeholders. The following tables summarize these constraints, divided into technical, organizational, and political constraints, with a brief explanation for each one.
2.1. Technical constraints
Constraint | Explanation |
---|---|
Wikidata | The application must generate questions automatically from data retrieved from Wikidata. |
Version control and monitoring (GitHub) | GitHub is used for version control and collaboration among the team members working on the project. It eases coordination and organization of the development process, as well as keeping track of the changes and contributions made by each team member. |
User experience | The design of the application must make it friendly and easy to use. |
Test coverage | Code must meet good test quality and coverage to ensure the expected outcome. |
2.2. Organizational constraints
Constraint | Explanation |
---|---|
Team | The project will be done by a team of 7 students, so work must be assigned accordingly. |
Git-based development | The project will be built around the Git workflow, so all tools used must interact closely with this system. |
Meetings | The project's development process must be reflected in the minutes of each meeting. |
Delivery deadlines | There are 4 deliverables, one every 3 weeks, that must be met before the deployment of the application. |
2.3. Political constraints
Constraint | Explanation |
---|---|
Documentation | We are going to use AsciiDoc and follow the Arc42 template. |
Language | The documentation and application will be developed in English. |
3. System Scope and Context
3.1. Business Context
- Player (user): The user interacts with the STAP web application through its front-end.
- STAP System (core system): System that allows players to play question games based on information from the Wikidata API.
- Wikidata API (external system): API that exposes the information stored in the Wikidata database.
3.2. Technical Context
Component | Technologies Used |
---|---|
Front-end | HTML, CSS (Tailwind), JavaScript (React) |
Backend | Node.js (Express), Wikidata's API |
Database | MongoDB |
Architecture | Microservices |
Deployment and Maintenance | Docker |
Component | Functionality |
---|---|
Front-end | User interaction and results display. |
Backend | Logical processing, communication with the external API and the database. |
Database | Data storage. |
External API | Data query from Wikidata. |
In this flow:
- The user interacts with the user interface (front-end) through clicks and responses.
- The backend processes the requests, consults the Wikidata API, and updates the screen.
- The channels are the HTTP connections between the components.
- The mapping evaluates the user's responses in real time to provide an appropriate response.
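To illustrate these channels, the sketch below shows how the front-end might request a question from the backend over one of these HTTP connections. The gateway URL and the `/questions` route are illustrative assumptions, not the project's actual API.

```js
// Hypothetical front-end call through the API gateway over HTTP.
const GATEWAY_URL = 'http://localhost:8000'; // gateway port taken from the deployment view

async function fetchQuestion() {
  const response = await fetch(`${GATEWAY_URL}/questions`); // hypothetical route
  if (!response.ok) {
    throw new Error(`Gateway returned ${response.status}`);
  }
  return response.json(); // e.g. { text, answers, correctAnswer, wikiLink }
}

fetchQuestion()
  .then((question) => console.log(question.text))
  .catch((error) => console.error('Could not load question:', error));
```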
4. Solution Strategy
This section covers the technological, architectural, design, and organizational decisions made along the project for its appropriate development.
4.1. Technologies
- React: JavaScript library for web and native user interfaces. It allows developers to create interactive web applications by breaking down the UI into reusable components. React uses a declarative approach to efficiently update and render components, resulting in faster and more maintainable code. It's widely adopted in the industry due to its simplicity, performance, and robustness.
- Node.js: JavaScript runtime that enables running JavaScript code outside of web browsers. It's renowned for its event-driven architecture and extensive collection of packages, making it ideal for building scalable server-side applications.
- Express.js: Express.js, often simply called Express, is a minimalist web application framework for Node.js. It simplifies the process of building web applications by providing a robust set of features, including middleware support, routing, and templating engines. Express is known for its flexibility, simplicity, and performance, making it a popular choice for developing web applications and APIs in Node.js.
- Wikidata: Wikidata provides a REST API for retrieving information related to any topic. It lets us dynamically generate questions for our game from any programming language.
- MongoDB: popular NoSQL database known for its flexibility and scalability. It stores data in flexible JSON-like documents and is widely used in modern web development for its simplicity and ability to handle large volumes of data.
- SonarCloud: cloud-based service provided by SonarSource, which offers continuous code quality analysis and automated code reviews for software development projects. It helps developers identify and fix bugs, security vulnerabilities, and code smells in their codebase to improve overall software quality.
- Arc42: framework (template) used for documenting and communicating software architectures. It provides a template for describing the architecture of a software system, covering aspects such as stakeholders, requirements, architecture decisions, components, interfaces, and quality attributes. Arc42 helps teams create consistent and comprehensible architecture documentation, enabling better communication, understanding, and maintenance of software systems throughout their lifecycle.
- npm: default package manager for Node.js, providing a command-line interface to install, manage, and publish JavaScript packages. With over a million packages available in its registry, npm simplifies adding functionality to Node.js projects by handling dependencies and providing tools for versioning and publishing packages.
- Docker: platform used for deploying our services inside containers. Containers are lightweight, portable, and self-sufficient units that contain everything needed to run an application, including the code, runtime, system tools, libraries, and settings. Docker enables developers to package their applications along with all dependencies into containers, ensuring consistency across different environments, such as development, testing, and production.
- GitHub Actions: built-in automation tool on GitHub that allows us to automate workflows triggered by specific actions on GitHub branches during development. It provides continuous integration of the game functionality.
- Gatling: load-testing tool that allows us to record user interactions with our application and replay them as if various different users were accessing the application.
- Prometheus: monitoring and alerting toolkit designed for reliability and scalability. It collects metrics from configured targets at specified intervals, stores them efficiently, and provides a powerful query language for analyzing and alerting on these metrics. It's particularly well suited for dynamic environments like cloud-native applications and microservices architectures.
- Grafana: open-source platform for monitoring and observability, providing customizable dashboards and visualization tools for analyzing metrics, logs, and other data sources. It allows users to create dynamic, interactive dashboards to monitor the health and performance of their systems and applications.
- Azure: cloud computing service used for creating virtual machines and running Docker containers. Azure provides a scalable and flexible infrastructure for hosting our microservices-based application, ensuring high availability and reliability.
- GitHub: version control and project management platform used for managing our game project. GitHub provides features for collaboration, issue tracking, and code review, facilitating efficient development workflows and team communication.
- Tailwind CSS: utility-first CSS framework for creating custom designs without having to write CSS from scratch. Tailwind CSS offers a set of pre-defined utility classes that can be applied directly in HTML markup, enabling rapid development and consistent styling across the application.
4.2. Technological decisions
At the beginning of the project, the team decided to develop the Wikidata API with .NET and the C# programming language. As part of continuous integration, deployment of the application was attempted without success due to Docker issues with the .NET container. Therefore, the team decided to migrate the whole API to Node.js, using JavaScript and the Express framework. In conclusion, it was worth spending time on the migration to reduce the number of potential issues at deployment time.
4.3. Solution strategy in context with quality attributes
Quality goal | Scenario | Solution approach | Link to Details |
---|---|---|---|
Reliability | Ensure system stability even under high loads or failure scenarios | Perform load tests and assess the system's reliability, as well as provide the user with correct and consistent error messages when needed | Development concepts section inside Cross-cutting Concepts |
Performance | Maintain fast response times even under heavy usage | Retrieve Wikidata information beforehand to give quick response times, and perform load tests to assess the system's performance | |
Security | Protect sensitive data and prevent unauthorized access | Implement encryption and a logging system | User's login inside Runtime View |
Usability | Ensure the system is intuitive and easy to use | Conduct user testing and improve the user interface design | Usability tests inside Cross-cutting Concepts |
Portability | Enable the system to run across different platforms | Use Docker containerization and adhere to standards | |
Testability | Facilitate thorough testing and validation of system functionality | Implement automated testing frameworks and ensure code coverage | Testing inside Cross-cutting Concepts |
Availability | Ensure the system is accessible and operational when needed | Implement monitoring, proactive maintenance, and disaster recovery plans | Monitoring with Grafana inside Cross-cutting Concepts |
4.4. Architecture & Design
- Microservices: Our game is built using a microservices architecture, which structures the application as a collection of loosely coupled services. Each service encapsulates a specific functionality or business capability, allowing for independent development, deployment, and scaling. By adopting microservices, we promote modularity and flexibility, enabling rapid iteration and innovation.
- Containerization with Docker: We leverage Docker containerization to package each microservice and its dependencies into lightweight, portable containers. Docker provides a consistent environment across different stages of the development lifecycle, ensuring seamless deployment and scalability. With Docker, we can easily spin up new instances of services, manage dependencies, and streamline our development and deployment workflows.
- API Gateway: We employ an API gateway as a centralized entry point for all client requests to our microservices. The API gateway serves as a reverse proxy, routing incoming requests to the appropriate microservice based on predefined rules and policies. It provides a unified interface for clients to interact with our system, abstracting away the complexities of the underlying microservices architecture. By consolidating access through the API gateway, we enhance security, governance, and performance while simplifying client interactions.
- Scalability and Elasticity: With our microservices architecture orchestrated with Docker, we achieve horizontal scalability and elasticity to handle fluctuations in traffic and workload. Docker's container-based approach enables us to dynamically scale individual services based on demand, ensuring optimal resource utilization and cost efficiency. Combined with automated scaling policies and monitoring, we maintain responsiveness and availability during peak usage periods.
- Observability and Monitoring: We prioritize observability and monitoring in our architecture to gain insights into the performance, health, and behavior of our microservices. Leveraging tools such as Prometheus, Grafana, and the ELK stack, we collect metrics, logs, and traces from across our infrastructure, allowing us to detect anomalies, troubleshoot issues, and optimize system performance. With comprehensive observability, we ensure reliability, maintainability, and continuous improvement of our game platform.
4.5. Team Organization
For developing this project we use GitHub as the version control system. The master branch contains the final version of the product, so every accepted pull request to the master branch is considered a new release. The production branch contains the work currently in production, and everybody should create their own branch from it for their specific development.
- Documentation: it must always be kept up to date to make our work valuable and consistent.
- Weekly meetings: weekly discussions about what has been done and what needs to be done are key for our team's success.
- GitHub: this version control system not only allows us to share and collaboratively write code, but also provides other resources such as issues and project management (Kanban board) tools for organizing the work to be done. The wiki section also allows us to store the minutes of each scheduled meeting.
- WhatsApp: keeps us in constant communication so we can help each other out whenever needed.
- Discord: useful for holding informal meetings and making decisions whenever it is impossible for all of us to be present in a specific place.
5. Building Block View
5.1. Whitebox Overall System
Motivation

This is a basic introduction to the app, highlighting the external services it uses and how they work together.
Contained Building Blocks

Name | Responsibility |
---|---|
STAP | The main application, currently represented as a whitebox. The following sections break it down in detail. |
WikidataAPI | External API used as the knowledge hub. |
5.2. Level 1
Motivation

The reasoning behind this separation is to achieve a modular architecture with a clear separation of concerns. It also allows the user management and the question generation to be exposed as APIs.
Contained Building Blocks

Name | Responsibility |
---|---|
Frontend | Represents the user interface and manages the quiz logic of the application. |
User Management | Handles everything related to user accounts. |
Wikidata Service | Generates questions from Wikidata data. |
Gateway | Acts as a central hub for managing API traffic. |
Important Interfaces

Name | Description |
---|---|
Frontend → User Management | Defines how the frontend communicates with the User Management service to log in, retrieve user data, or perform actions requiring authorization. |
Frontend → Wikidata Service | Defines how the Wikidata Service delivers processed questions to the frontend for display. |
Wikidata Service → Wikidata API | Represents the service fetching data from the Wikidata API. |
5.3. Level 2
5.3.1. User Management Service
Contained Building Blocks

Name | Responsibility |
---|---|
AuthService | Manages the authentication of the application. |
UserService | Manages the creation of users and everything related to statistics. |
MongoDB | Stores the users' information. |
Important Interfaces

Name | Description |
---|---|
AuthService → MongoDB | Checks whether the user trying to log in is registered in the system and, if so, generates a JWT token. |
UserService → MongoDB | Saves the user in the database on creation, and retrieves or updates the desired statistics. |
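A minimal sketch of this login flow, assuming an Express service using the `mongoose`, `bcrypt`, and `jsonwebtoken` packages; the project's actual schema, route names, and secret handling may differ:

```js
const express = require('express');
const bcrypt = require('bcrypt');
const jwt = require('jsonwebtoken');
const mongoose = require('mongoose');

const app = express();
app.use(express.json());

// Hypothetical user model; the real schema may differ.
const User = mongoose.model('User', new mongoose.Schema({
  username: String,
  passwordHash: String,
}));

app.post('/login', async (req, res) => {
  const { username, password } = req.body;
  // Check whether the user is registered in MongoDB.
  const user = await User.findOne({ username });
  if (!user || !(await bcrypt.compare(password, user.passwordHash))) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }
  // If so, generate a JWT token for subsequent authorized requests.
  const token = jwt.sign({ username }, process.env.JWT_SECRET, { expiresIn: '1h' });
  res.json({ token });
});
```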
5.3.2. Wikidata Service
Contained Building Blocks

Name | Responsibility |
---|---|
Wikidata Service | Gets information from the Wikidata API and stores the questions generated by the question generation service. |
Question Generation | Receives the data and builds questions based on it. |
Wikidata API | Retrieves the information stored in the Wikidata database. |
Important Interfaces

Name | Description |
---|---|
Wikidata Service → Wikidata API | The service asks Wikidata for information by means of a SPARQL query. |
Wikidata Service ←→ Question Generation | The service passes the data to the question generator, and the generator returns well-formed questions. |
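For illustration, a minimal sketch of such a SPARQL request against the public Wikidata endpoint, here asking for countries and their capitals; the project's actual queries and entity choices may differ:

```js
// Query the public Wikidata SPARQL endpoint for countries and their capitals.
const SPARQL_ENDPOINT = 'https://query.wikidata.org/sparql';

const query = `
  SELECT ?countryLabel ?capitalLabel WHERE {
    ?country wdt:P31 wd:Q6256;    # instance of: country
             wdt:P36 ?capital.    # capital
    SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
  }
  LIMIT 20
`;

async function fetchCapitals() {
  const url = `${SPARQL_ENDPOINT}?query=${encodeURIComponent(query)}&format=json`;
  const response = await fetch(url, {
    headers: { Accept: 'application/sparql-results+json' },
  });
  const data = await response.json();
  // Each binding could feed the question generator, e.g. as
  // { text: `What is the capital of ${country}?`, correct: capital }.
  return data.results.bindings.map((b) => ({
    country: b.countryLabel.value,
    capital: b.capitalLabel.value,
  }));
}
```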
6. Runtime View
6.1. User’s Login
Sequence diagram for showing the process of a user logging in:
6.2. User’s sign up
Sequence diagram for showing the process of a user creating an account:
6.3. Data retrieval from Wikidata
Sequence diagram for the process of retrieving data from Wikidata:
7. Deployment View
Our project is configured with GitHub Actions in such a way that every release triggers unit and end-to-end tests, plus an attempt to deploy the application to a server. This allows our team to achieve continuous delivery and deployment.
7.1. Quick deployment guide
Using your Azure account:

- Create an Ubuntu 20.04 virtual machine from Azure (www.portal.azure.com):
  - Select an available location (usually Switzerland North, Zone 1, is available).
  - Select the virtual machine size "Standard B1s" (1 vCPU, 1 GiB of memory).
  - Set the username to azureuser.
  - Allow SSH on port 22.
- Configure the GitHub repository secrets with the server's information:
  - Download the private key (.pem file) and paste all of its textual content into DEPLOY_KEY. Save the file for the later SSH configuration of the virtual machine.
  - Check the public IP in Azure and paste it into DEPLOY_HOST.
  - DEPLOY_USER does not need to be changed.
- Once the virtual machine is created and the repository is configured, go to Network Settings and add extra rules:
  - Open port 80 for accessing the web application, or 443 in case HTTPS is used.
  - Open port 8000 to give access to the API gateway.
  - Open port 9091 to give access to the application's monitoring data in Grafana.
- Configure the virtual machine over SSH to use Docker:
  - Use a tool for connecting to the server over SSH (PuTTY, MobaXterm, ...).
  - Use the public IP address and the local .pem file to make the connection.
  - Run the following commands to prepare the virtual machine:
# Refresh the package index and install the prerequisites for Docker's repository
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
# Add Docker's GPG key and package repository, then install Docker
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt update
sudo apt install docker-ce
# Let the current user run Docker without sudo
sudo usermod -aG docker ${USER}
# Install Docker Compose and make it executable
sudo curl -L "https://github.com/docker/compose/releases/download/1.28.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
- Make a release in GitHub:
  - On the right-hand side of the main Code section of our repository there is a section called Releases. Add a new version there, following the version naming convention.
  - Once the release is made, the GitHub Actions workflows are triggered, and the containers will be tested and running once everything finishes.
  - If a test fails during the process, the deployment is automatically aborted.
7.2. Infrastructure
General view of system’s infrastructure
7.3. Infrastructure Level 1 - Azure Ubuntu Server
The Ubuntu server gives us an isolated machine with the minimal configuration and installations required to run our services. Having our server on Azure allows us to minimize the cost of keeping that machine running, as well as to offload responsibilities such as security, availability, and maintenance.
7.4. Infrastructure Level 2 - Docker
Instead of running the whole application on a single virtual machine by itself, the application is split into different services that can be completely isolated. Docker allows us to create containers with the minimum amount of resources needed to run each specific service, so that resources are not wasted and heavily used services do not collapse the others. Each container runs the specific Docker image of its service. Since each implemented service is isolated at deploy time, there is no need to write the services in the same programming language or follow the same architectural patterns, and requests are served through different, independent endpoints.
The virtual machine will contain as many containers as there are services in the application.
For now, the project contains:
- Web application service running on port 3000
- Gateway (middleware) service running on port 8000
- Wikidata API running on port 8001
- Users API running on port 8003
- MongoDB server running on port 27017
- Prometheus running on port 9090 for monitoring
- Grafana running on port 9091 for analytics and monitoring
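As a sketch of how one of these Node.js services could expose metrics for the Prometheus container to scrape, assuming the `prom-client` package (the project's actual instrumentation and metric names may differ):

```js
const express = require('express');
const client = require('prom-client');

const app = express();

// Collect default Node.js process metrics (CPU, memory, event loop lag, ...).
client.collectDefaultMetrics();

// Hypothetical custom counter; game logic would call answeredQuestions.inc().
const answeredQuestions = new client.Counter({
  name: 'stap_questions_answered_total',
  help: 'Total number of quiz questions answered',
});

// Endpoint scraped by the Prometheus container.
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});

app.listen(8003, () => console.log('Service with metrics on port 8003'));
```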
7.5. Infrastructure Level 3 - GitHub actions
GitHub Actions provides us with continuous, automatic integration and delivery, automating the deployment phase on each release.
7.6. Motivation
In the deployment view of our software architecture, we delineate the physical deployment of our system components across various environments. At the core of our deployment strategy is the utilization of cloud-based infrastructure, specifically Azure, for its robustness and scalability. Our server components, including the web application, gateway, user services, and MongoDB server, are encapsulated within Docker containers to ensure portability and consistency across deployments. Additionally, we employ Azure's built-in services for auto-scaling and traffic management to optimize performance and reliability. Continuous integration and deployment pipelines are established with GitHub Actions, facilitating seamless updates and releases of our system components. Monitoring and logging solutions, Prometheus and Grafana, are integrated to provide insights into system health and performance. Overall, our deployment view showcases a resilient, scalable, and automated deployment architecture tailored to meet the demands of our system's evolving requirements.
7.7. Mapping of Building Blocks into Infrastructure
Name | Responsibility |
---|---|
Frontend | Web App container, exposed on port 3000. |
User Management | User service container. |
Wikidata Service | Wikidata service container. |
Gateway | API Gateway service, exposed on port 8000. |
8. Cross-cutting Concepts
8.1. Domain Concepts
8.1.1. Question
In our app, a question is always represented as a data structure with the following format:

```js
{
  text: "What is the capital of Asturias?",
  answers: ["Gijón", "Oviedo", "Cangas de Onís"],
  correctAnswer: 1,
  wikiLink: "https://www.wikidata.org/wiki/Q14317"
}
```
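A small sketch of how this structure might be consumed when grading an answer; the helper is illustrative, not necessarily the project's actual code:

```js
// Returns true when the chosen answer index matches the correct one.
function isCorrectAnswer(question, chosenIndex) {
  return chosenIndex === question.correctAnswer;
}

const question = {
  text: 'What is the capital of Asturias?',
  answers: ['Gijón', 'Oviedo', 'Cangas de Onís'],
  correctAnswer: 1,
  wikiLink: 'https://www.wikidata.org/wiki/Q14317',
};

console.log(isCorrectAnswer(question, 1)); // true: "Oviedo" is correct
```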
Benefits:

- Consistency: this format ensures a consistent representation of questions throughout the app, reducing errors and simplifying code maintenance.
- Clarity: by explicitly defining the data format, developers can clearly understand how to work with question data within the codebase.
- Flexibility: because the answers are an array paired with the index of the correct one, the array can be of any size.
8.2. UX Concepts
8.2.1. Color
We decided to use a color palette of 4 colors:
Name | Color |
---|---|
Background | #191919 |
Text | #f2ecff |
Primary | #00c896 |
Danger | #e35a2a |
Benefits:

- Clarity: thanks to this simple palette it is very easy to identify when something is correct or not.
- Consistency: by using a limited set of colors, the overall visual design of the application will be cohesive and harmonious.
- Accessibility: the chosen colors provide good contrast ratios, ensuring the content is readable and accessible for users with various visual abilities.
- Branding: the selected colors can be used to reinforce the application's brand identity and make it recognizable to users.
The chosen color palette strikes a balance between functionality, aesthetics, and branding. The dark background with light text provides a high-contrast theme that is easy on the eyes, while the primary and danger colors are used sparingly to highlight important information or actions.
8.3. Development concepts
8.3.1. Testing and Monitoring
We performed Load Testing, Unit Testing, End-to-end testing and Code Analysis with SonarCloud. The results obtained can be checked here: Appendix I - Testing results
8.3.2. Configurability
The application has simple configurable game features for selecting between two game modes (normal and trivia) and two difficulty levels (easy and hard):
- The normal mode consists of 10 random questions, each with a limited amount of time to answer before the possibility to do so is lost. The easy and hard difficulties differ in the amount of time the user has to answer each question.
- The trivia mode consists of 10 questions, generated based on the category that results from rolling a die. There are 6 possible categories, among them sports, science, history, geography, and entertainment.
Additionally, there is an option in the main application view where random music can be played.
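A hypothetical sketch of how these options could be represented in code; the names and time limits are illustrative, not the project's actual values:

```js
// Illustrative configuration of the game modes and difficulty levels.
const GAME_CONFIG = {
  modes: {
    normal: { questions: 10, categorySelection: 'random' },
    trivia: { questions: 10, categorySelection: 'die-roll' },
  },
  difficulties: {
    easy: { secondsPerQuestion: 30 },
    hard: { secondsPerQuestion: 10 },
  },
};

// Returns the time limit for a question given the chosen difficulty.
function timeLimitFor(difficulty) {
  return GAME_CONFIG.difficulties[difficulty].secondsPerQuestion;
}

console.log(timeLimitFor('hard')); // 10
```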
8.3.3. Data access
The development team followed two different approaches to support data access from the running application, one for development and one for production. While developing the application, the team created a shared database located in the cloud, which allowed everyone to work locally with the same data by means of a connection string. To move the application into production, deployed on an Azure virtual machine running Docker containers, the team created a MongoDB container with an associated volume to make the data persistent.
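A sketch of how a service might switch between the two setups through an environment variable, assuming Mongoose; the variable name is illustrative:

```js
const mongoose = require('mongoose');

// In development this points at the shared cloud database (e.g. a hosted
// connection string); in production it points at the MongoDB container.
const mongoUri = process.env.MONGODB_URI || 'mongodb://localhost:27017/stap';

async function connect() {
  await mongoose.connect(mongoUri);
  console.log(`Connected to MongoDB at ${mongoUri}`);
}

connect().catch((error) => {
  console.error('MongoDB connection failed:', error);
  process.exit(1);
});
```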
9. Architecture Decisions
Throughout the development of the application, decisions had to be taken as problems arose. These are the final decisions we made, together with the advantages that motivated them. For a description of each of the technologies we have chosen, see the Glossary of this documentation.
9.1. Microservices architecture
The team opted for a microservices architecture as the foundation of our system due to the advantages it provides. By breaking down our application into smaller, independently deployable services, we gain scalability and flexibility. Each microservice operates autonomously, allowing us to develop, deploy, and update components without affecting the entire system. Furthermore, microservices promote technology diversity, enabling us to choose the best tools for each service's specific needs. By means of an API gateway, all the services can communicate and be served as if they were one.
9.2. API Gateway
To streamline communication between our backend services, we’ve implemented an API gateway. This gateway acts as a central hub, providing a unified entry point for all client requests. By consolidating communication through the API gateway, we simplify access control, load balancing, and monitoring across our system. This approach enhances scalability and maintainability while enabling us to implement cross-cutting concerns such as authentication and rate limiting in a centralized manner. The API gateway plays a pivotal role in orchestrating interactions between services, optimizing performance, and ensuring a cohesive and reliable architecture.
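A minimal sketch of such a gateway in Express, assuming the `http-proxy-middleware` package and internal service hosts taken from the deployment view; the project's actual routing rules may differ:

```js
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Cross-cutting concerns (authentication, rate limiting, logging) would be
// registered here as middleware so they apply to every proxied request.

// Route each path prefix to the corresponding microservice container.
// Host names and ports are assumptions based on the deployment view.
app.use('/users', createProxyMiddleware({ target: 'http://users:8003', changeOrigin: true }));
app.use('/questions', createProxyMiddleware({ target: 'http://wikidata:8001', changeOrigin: true }));

app.listen(8000, () => console.log('API gateway listening on port 8000'));
```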
9.3. Docker containers
Docker containers are used for our web application and an API gateway for inter-service communication, driven by their portability, scalability, and maintainability advantages. Docker ensures consistent deployment across environments, facilitating independent scaling of services. By routing communication through the API gateway, we centralize access control and monitoring, simplifying management and promoting modularity and flexibility. This approach optimizes system management, scalability, and interoperability, aligning with our project’s architectural goals while enhancing monitoring capabilities for streamlined performance tracking and issue resolution.
9.4. React & Tailwind CSS
We’re building our web application with React and Tailwind CSS for their efficiency and modern development approach. React’s component-based architecture simplifies UI creation and updates, while Tailwind CSS’s utility-first framework streamlines styling for rapid prototyping and consistent design. This combination allows us to create a visually appealing and highly responsive web application efficiently, aligning with our goal of delivering a modern, user-friendly interface while maintaining flexibility and scalability in our frontend development process.
9.5. Node.js
Initially the Wikidata service for generating game questions was developed using .NET. However, encountering deployment issues with Docker in Azure prompted us to migrate all backend services to Node.js and Express. This strategic move ensures a smoother, more reliable and even more comfortable deployment process, enhancing system reliability and maintainability.
We’ve chosen Node.js with Express for developing all backend services due to its lightweight, efficient, and scalable nature thanks to modularity. Node.js offers non-blocking I/O operations, enabling high concurrency and responsiveness, which is crucial for handling asynchronous tasks common in web applications. Express, a minimalist web framework for Node.js, simplifies the development of robust and RESTful APIs, providing essential features like routing, middleware support, and error handling. Additionally, the vibrant ecosystem of Node.js libraries and modules enhances productivity and enables seamless integration with other technologies and services. Overall, Node.js with Express empowers us to build performant, scalable, and maintainable backend services that align with our project’s requirements and architectural goals.
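To make the features named above concrete, here is a small sketch of an Express service with routing, middleware, and error handling; the route and the data-access helper are hypothetical:

```js
const express = require('express');

const app = express();
app.use(express.json()); // middleware: parse JSON request bodies

// Stub for the real data-access layer (the project would query MongoDB here).
async function loadStats(username) {
  return { username, gamesPlayed: 0, questionsPassed: 0, questionsFailed: 0 };
}

// Routing: a hypothetical endpoint returning a user's game statistics.
app.get('/stats/:username', async (req, res, next) => {
  try {
    res.json(await loadStats(req.params.username));
  } catch (error) {
    next(error); // delegate to the error-handling middleware below
  }
});

// Error handling: a single middleware catches errors from all routes.
app.use((err, req, res, next) => {
  console.error(err);
  res.status(500).json({ error: 'Internal server error' });
});

app.listen(8003, () => console.log('Users API listening on port 8003'));
```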
The following table contains the most relevant design decisions we have taken, with their advantages and disadvantages:

Decision | Advantages | Disadvantages |
---|---|---|
React.js | Quite easy to learn in comparison to other front-end libraries. Increasingly popular on the web. | Not all of us know how to use it. |
Tailwind CSS | Consistent and unified design system, with the ability to speed up the development process. Rapidly growing utility-first CSS framework. | Quite new for most of us. |
MongoDB | It does not need to be started manually. Free and easy to understand. | We are quite new to MongoDB. |
Docker | Fast deployment, ease of moving/maintaining applications. Easy, as we already have Dockerfile examples. | We do not have much experience using Docker. |
PlantUML | Allows drawing diagrams very easily, with a simple syntax. | Does not allow as much control over the exact layout of the elements in the diagram as other tools. |
Node.js | For small applications it is a very fast technology. It is easy to learn and we already know a bit about it. | Its performance is reduced with heavy computational tasks. |
Wikidata API also in Node.js | Better project structure. Same language as the users API. Easier for us to deploy. | Its performance is reduced with heavy computational tasks. |
10. Quality Requirements
10.1. Quality Tree
10.2. Quality Scenarios
Usage scenarios
Quality goal | Motivation | Usage scenario | Priority |
---|---|---|---|
Reliability | The application must provide users with consistent performance and predictable results. | When users access the web, it must behave the same every time, giving nearly equal results and response times. | Very high |
Performance | The application must have a reasonable response time; slow applications are not popular. | The application must respond within at most 5 seconds with 10 concurrent users. | Very high |
Security | Our web must be secure, not only to protect data but also to provide a reliable solution to our users. If we cannot assure our clients that the web is secure, no one will use it. | Data will only be accessible by its owner. If a user tries to access other people's information, the system will deny the operation, as data is stored in a secure system. | Very high |
Usability | To make the website stand out from the competition, it must be easy to use, attract attention, and be aesthetic. | The user must be able to identify the game elements shown on the screen, as well as the menu for the different functionalities, such as viewing the profile or logging out. | Very high |
Portability | To reach the maximum number of users, the application must work on the maximum number of infrastructures. | The game experience and functionalities must be the same regardless of the device from which the user is connecting. | High |
Testability | All features of the application must be testable in order to verify that the web built is the one that was asked for. | The unit tests run by the developers must generate an observable output. | High |
Availability | The application must be available 24 hours a day, every week. | The user must be able to play at any time, whenever they have free time. | High |
Change scenarios
Quality goal | Motivation | Change scenario | Priority |
---|---|---|---|
Maintainability | An application should be maintainable to remain usable over the years, to improve functionalities, and to fix malfunctions. | When developers must introduce a new feature to the web, they should be able to do it without changing the software architecture. | High |
11. Risks and Technical Debts
This section contains a list of identified risks that the project will face during its lifetime. In addition, each risk comes with a brief description, the probability of its occurrence, its impact on the project, and a solution to minimize or mitigate it.
11.1. Risks
Risk | Description | Probability | Impact | Solution |
---|---|---|---|---|
Complications with the project characteristics | Almost no one on the team has ever done a project of this size, and there may be some trouble. | Medium | High | Each member will try to maximize their knowledge of some aspect of the project in the first weeks, in order to become something similar to a leader in each of the possible key aspects of the project. |
Problems with Wikidata | The team has only used Wikidata once before, and not even all of us did. | High | Very high | We must read some documentation and try out some basic features to familiarize ourselves with Wikidata. |
Teamwork issues | The members of the team have never worked together. This may cause problems such as a lack of communication or of trust in each other's work. | Medium | Medium | We will keep in touch a few times a week to see each one's progress on their tasks, and we will try to build confidence in each other throughout the development of the project, as most of us met in this subject. |
Differences with technologies | Some members do not know as much about some aspects of the development. | Medium | Low | The members who know more about each aspect will help the others understand the things they might find difficult. |
Deadlines | The project is based on fixed deadline days when our work is presented. | Very high | High | The team will follow the project planning to avoid problems with each of the deadlines. |
11.2. Technical Debts
The day when Wikidata becomes outdated could come while the app is still working. It is quite unlikely, but it could happen.
Relying on Wikidata to retrieve the questions also means that if the Wikidata service fails for some reason, the app would fail as well.
12. Glossary
Term | Definition |
---|---|
STAP | A web application where users can register and log in in order to play. The game consists of answering a number of questions of different types and subjects, obtaining a prize for each question answered correctly. |
Wikidata | A collaborative, free, and open knowledge base that stores structured information. It aims to provide a common source of data that can be used by Wikimedia projects and anyone else, under a public domain license. |
Saber y Ganar | A Spanish television quiz show. It involves contestants competing in several rounds of questions to test their knowledge in different categories. |
Diagram | A visual representation of information, data flow, processes, or systems using symbols, shapes, and lines to illustrate relationships, connections, and concepts. |
Front-end | The part of a software application or website that users interact with directly. It encompasses the user interface, design elements, and functionality visible to users. |
Back-end | The behind-the-scenes part of a software application or website responsible for handling data processing, server-side logic, and database interactions. It includes the server, database, and application logic that users do not directly interact with. |
Microservices | An architectural approach to building software applications as a collection of small, loosely coupled services. Each service is designed to perform a specific business function and can be developed, deployed, and scaled independently. |
Stakeholder | Individuals or groups with an interest or concern in a project, product, or organization. Stakeholders may include any party affected by or involved in the outcomes of a particular initiative. |
Docker | A platform for developing, shipping, and running applications in containers. It allows developers to package applications and their dependencies into standardized units called containers, providing a consistent environment for software deployment across different computing environments. |
Deployment | The process of making a software application, website, or service available for use. It involves taking the codebase of a developed application and installing it onto servers or other computing infrastructure so that it can be accessed by end-users. |
MongoDB | A popular open-source NoSQL database management system known for its flexibility, scalability, and ease of use. It stores data in a flexible, JSON-like format called BSON and is commonly used for applications requiring high-volume data storage and real-time data processing. |
API (Application Programming Interface) | A set of rules and protocols that allows different software applications to communicate and interact with each other. APIs define the methods and data formats that applications can use to request and exchange information, enabling developers to access the functionality of other software components or services without having to understand their internal workings. APIs are commonly used to integrate third-party services, access data from remote servers, and build modular and interoperable software systems. |
13. Appendix I - Testing results
13.1. Testing and Monitoring
- Unit Testing: Unit testing plays a crucial role in our development workflow, providing confidence in the stability and functionality of our application. By utilizing .test.tsx files for our React components and .test.js files for our Node.js backend, we ensure that our code behaves as expected, regardless of the environment. This comprehensive approach to testing allows us to catch and address issues early in the development cycle, leading to higher-quality software and smoother deployment processes. A small example is sketched below.
- Code Analysis: We employed the SonarCloud tool to monitor the code covered by these tests, while continuous integration practices were implemented using GitHub Actions.
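A minimal illustration of such a unit test with Jest, reusing the question structure from the Cross-cutting Concepts section; the grading helper is hypothetical:

```js
// question.test.js — illustrative Jest unit test for a question-grading helper.
function isCorrectAnswer(question, chosenIndex) {
  return chosenIndex === question.correctAnswer;
}

describe('isCorrectAnswer', () => {
  const question = {
    text: 'What is the capital of Asturias?',
    answers: ['Gijón', 'Oviedo', 'Cangas de Onís'],
    correctAnswer: 1,
  };

  test('accepts the correct answer index', () => {
    expect(isCorrectAnswer(question, 1)).toBe(true);
  });

  test('rejects a distractor', () => {
    expect(isCorrectAnswer(question, 0)).toBe(false);
  });
});
```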
Results for web application:
Results for gateway service:
Results for users service:
Results for wikidata service:
- Load Testing and Monitoring: We used Gatling to record user simulations consisting of logging into the application, playing the existing game modes, and checking the statistics and leaderboards. Gatling then allowed us to measure the performance of the application and the average response times when creating 2, 5, 10, and 25 users per second over 60 seconds while performing the previously mentioned simulations, using 2 different Azure machines. Specifically, load testing was performed using a 1-CPU machine with almost 1 GB of RAM and a 2-CPU machine with 8 GB of RAM.
  - 1-CPU machine
    With this machine, all requests were answered quickly when 2 users per second were created (120 users in total): almost all requests responded in less than 1 second, with a minimum response time of just 24 milliseconds and a maximum response time of 1.6 seconds.
    When creating 5 users per second (350 users in total), most of the responses (around 80%) took less than a second, with a minimum response time of 24 milliseconds, but with a maximum response time of 10 seconds, which is a huge amount of time for a web application.
    Unfortunately, when creating a load of 10 users per second (600 users in total), 41% of the responses failed, and around 70% of the responses either failed or took more than a second to be answered. With even more load, almost all responses would have failed.
    With this 1-CPU, 1 GB RAM Azure machine, we could afford around 200 users making constant requests without suffering a denial of service and while providing reasonable response times.
  - 2-CPU machine
    With this machine, all requests were answered quickly when 2 users per second were created (120 users in total): almost all requests responded in less than 1 second, with a minimum response time of just 24 milliseconds and a maximum response time of 1.8 seconds.
    When creating 5 users per second (350 users in total), most of the responses (around 80%) took less than a second, with a minimum response time of 24 milliseconds but a maximum response time of 10 seconds, exactly the same figure we obtained with the other machine, and a similar mean response time.
    When creating a load of 10 users per second (600 users in total), no response failed, although the maximum response time was 58 seconds with a mean response time of 2 seconds, which means that almost every response took less than a second.
    Finally, when creating a load of 25 users per second (1500 users in total), just 7% of the responses failed, and the maximum response time was 60 seconds with a mean response time of 5 seconds.
    Using this 2-CPU, 8 GB machine, response times are not improved by the more powerful hardware, so most of that work has to be done programmatically by improving our software. But better hardware allows the application to support much more load, which means more users playing at the same time: this time we could support more than 400, but fewer than 800, simultaneous users. Looking at the graphs of each of the simulations performed, most of the load always occurs at the beginning of the simulation, when the users have to log in. Once users are playing, the number of requests drops considerably, since all the information needed to play a game is requested at the beginning.
As a general conclusion, following the Azure payment plan for virtual machines: with low load requirements, paying around $35 per month affords a 1-CPU, 1 GB RAM web server supporting around 200 simultaneous users. If higher load requirements are needed, paying around $100 for a 2-CPU, 8 GB RAM server more than doubles the number of supported users, to over 400 simultaneous users.
- E2E Testing: We used behavior-driven development scenarios written in the Gherkin language as a basis for our end-to-end tests. We developed seven e2e tests that check the main paths through our application, making sure everything works correctly when integrated together. We test every endpoint of the app, up to the point where the information may vary between executions of the request.