1. Introduction and Goals

The STAP (Smarter Than A Penguin) web application is developed for RTVE to create an experimental version of the 'Saber y Ganar' quiz show. The primary goal of STAP is to provide users with an engaging platform where they can participate in quiz games, answer questions generated from Wikidata, and win prizes.

This document outlines the essential requirements guiding the software architects and development team in creating STAP.

It describes the relevant requirements that the software architects and the development team must consider, including:

  • the underlying business goals,

  • essential features and functional requirements,

  • quality goals for the architecture, and

  • relevant stakeholders and their expectations.

1.1. Requirements Overview

The system aims to fulfill the following essential requirements:

  1. Users can register and log in to participate in quiz games.

  2. Questions are automatically generated from data available in Wikidata.

  3. Users receive historical data of their participation, including the number of games played, questions passed and failed, and timestamps.

  4. Each question must be answered within a specific time limit.

  5. Questions consist of one correct answer and several distractors, all automatically generated.

  6. Access to user information and generated questions is available through an API.


1.2. Quality Goals

  • Reliability: Ensure consistent and accurate question generation and user data management.

  • Performance: Optimize system response times and capacity to handle many simultaneous user interactions.

  • Security: Implement robust security measures to protect user data and prevent unauthorized access.

  • Usability: Provide an intuitive and user-friendly interface to enhance the user experience.

  • Portability: Enable seamless deployment and operation across different environments and platforms.

  • Testability: Facilitate comprehensive testing to ensure software correctness and identify potential issues early.

  • Availability: Ensure system uptime and accessibility to meet user demand and minimize downtime.


1.3. Stakeholders


  • Users (N/A). Expectation: an intuitive and enjoyable quiz experience.

  • Professors: Pablo González (gonzalezgpablo@uniovi.es), Jose Emilio Labra (labra@uniovi.es), Cristian Augusto Alonso (augustocristian@uniovi.es), Jorge Álvarez Fidalgo (galvarezfidalgo@uniovi.es). Expectation: a well-designed web application that fulfills the requirements.

  • RTVE (https://www.rtve.es). Expectation: a reliable and engaging platform for users.

  • Development team: Sergio Truébano Robles (uo289930@uniovi.es), Pedro Limeres Granado (uo282763@uniovi.es), Alberto Guerra Rodas (uo282421@uniovi.es), Ángel Macías Rodríguez (uo289362@uniovi.es), Rita Fernández-Catuxo Ortiz (uo284185@uniovi.es), Vira Terletska (uo305097@uniovi.es), Sergio Llenderrozos Piñera (uo283367@uniovi.es). Expectation: clear documentation and a reliable, performant and available system.

2. Architecture Constraints

When designing the STAP application, there are several constraints that must be taken into consideration, as they will have a significant impact on the overall design of the application and the architectural decisions. These constraints must be considered in order to ensure that the final product meets the needs and expectations of the users and stakeholders. The following table summarizes these constraints and provides a brief explanation for each one divided into technical, organizational and political constraints.

2.1. Technical constraints

  • Wikidata: The application must generate questions automatically from data retrieved from Wikidata.

  • Version control and monitoring (GitHub): GitHub is used for version control and collaboration among the team members working on the project. It allows easier coordination and organization of the development process, as well as keeping track of the changes and contributions made by each team member.

  • User experience: The design of the application must make it friendly and easy to use.

  • Test coverage: The code must meet a good level of test quality and coverage to ensure the expected outcome.

2.2. Organizational constraints

  • Team: The project will be carried out by a team of 7 students, so work must be assigned accordingly.

  • Git-based development: The project will be built around the Git workflow, so all tools used must be able to interact closely with this system.

  • Meetings: The project’s development process must be reflected in the minutes of each meeting.

  • Delivery deadlines: There are 4 deliverables, one every 3 weeks, whose deadlines must be met before the final deployment of the application.

2.3. Political constraints

  • Documentation: We will use AsciiDoc and follow the arc42 template.

  • Language: The documentation and the application will be developed in English.


3. System Scope and Context


3.1. Business Context


Diagram
  • Player (user): The user interacts with the STAP web application through its front-end.

  • STAP System (core system): System that allows players to play quiz games based on information from the Wikidata API.

  • Wikidata API (external system): API that exposes the information stored in the Wikidata database.

3.2. Technical Context


Table 1. Table of the Technical Context

  • Front-end: HTML, CSS (Tailwind), JavaScript (React)

  • Backend: Node.js (Express), Wikidata’s API

  • Database: MongoDB

  • Architecture: Microservices

  • Deployment and maintenance: Docker

Diagram
Figure 1. Diagram of the Technical Context
Table 2. Mapping Input/Output to Channels

  • Front-end: User interaction and results display.

  • Backend: Logical processing; communication with the external API and the database.

  • Database: Data storage.

  • External API: Data queries to Wikidata.

In this flow:

  • The user interacts with the user interface (front-end) through clicks and responses.

  • The backend processes the requests, consults the Wikidata API, and updates the screen.

  • The channels are the HTTP connections between the components.

  • The user’s responses are evaluated in real time in order to provide an appropriate response.

4. Solution Strategy

This section covers the technological, architectural, design and organizational decisions made throughout the project for its appropriate development.


4.1. Technologies

  • React: JavaScript library for web and native user interfaces. It allows developers to create interactive web applications by breaking down the UI into reusable components. React uses a declarative approach to efficiently update and render components, resulting in faster and more maintainable code. It’s widely adopted in the industry due to its simplicity, performance, and robustness.

  • Node.js: JavaScript runtime that enables running JavaScript code outside of web browsers. It’s renowned for its event-driven architecture and extensive collection of packages, making it ideal for building scalable server-side applications.

    • Express.js: Express.js, often simply called Express, is a minimalist web application framework for Node.js. It simplifies the process of building web applications by providing a robust set of features, including middleware support, routing, and templating engines. Express is known for its flexibility, simplicity, and performance, making it a popular choice for developing web applications and APIs in Node.js.

  • Wikidata: Wikidata provides a REST API and a SPARQL endpoint for retrieving information related to almost any topic. It allows us to dynamically generate questions for our game from any programming language.

  • MongoDB: popular NoSQL database known for its flexibility and scalability. It stores data in flexible JSON-like documents and is widely used in modern web development for its simplicity and ability to handle large volumes of data.

  • SonarCloud: Cloud-based service provided by SonarSource, which offers continuous code quality analysis and automated code reviews for software development projects. It helps developers identify and fix bugs, security vulnerabilities, and code smells in their codebase to improve overall software quality.

  • Arc42: framework (template) used for documenting and communicating software architectures. It provides a template for describing the architecture of a software system, covering aspects such as stakeholders, requirements, architecture decisions, components, interfaces, and quality attributes. arc42 helps teams create consistent and comprehensible architecture documentation, enabling better communication, understanding, and maintenance of software systems throughout their lifecycle.

  • npm: default package manager for Node.js, providing a command-line interface to install, manage, and publish JavaScript packages. With over a million packages available in its registry, npm simplifies adding functionality to Node.js projects by handling dependencies and providing tools for versioning and publishing packages.

  • Docker: platform that will be used for deploying our services inside containers. Containers are lightweight, portable, and self-sufficient units that contain everything needed to run an application, including the code, runtime, system tools, libraries, and settings. Docker enables developers to package their applications along with all dependencies into containers, ensuring consistency across different environments, such as development, testing, and production.

  • GitHub Actions: built-in automation tool on GitHub that allows us to run workflows triggered by specific events on the repository’s branches during development. It provides continuous integration of the game’s functionality.

  • Gatling: load-testing tool that allows us to record user interactions with our application and replay them as if many different users were accessing the application simultaneously.

  • Prometheus: monitoring and alerting toolkit designed for reliability and scalability. It collects metrics from configured targets at specified intervals, stores them efficiently, and provides a powerful query language for analyzing and alerting on these metrics. It’s particularly well-suited for dynamic environments like cloud-native applications and microservices architectures.

  • Grafana: open-source platform for monitoring and observability, providing customizable dashboards and visualization tools for analyzing metrics, logs, and other data sources. It allows users to create dynamic, interactive dashboards to monitor the health and performance of their systems and applications.

  • Azure: Cloud computing service used for creating virtual machines and running Docker containers. Azure provides a scalable and flexible infrastructure for hosting our microservices-based application, ensuring high availability and reliability.

  • GitHub: Version control and project management platform used for managing our game project. GitHub provides features for collaboration, issue tracking, and code review, facilitating efficient development workflows and team communication.

  • Tailwind CSS: Utility-first CSS framework for creating custom designs without having to write CSS from scratch. Tailwind CSS offers a set of pre-defined utility classes that can be applied directly in HTML markup, enabling rapid development and consistent styling across the application.
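As an illustration of how the Wikidata SPARQL endpoint mentioned above might be queried from Node.js, the following sketch builds a query for countries and their capitals. The specific query and the User-Agent value are assumptions for illustration, not the exact ones used by the project.

```javascript
// Public endpoint of the Wikidata Query Service.
const WIKIDATA_SPARQL = 'https://query.wikidata.org/sparql';

// Builds a SPARQL query selecting countries (Q6256) and their capitals (P36).
function buildCapitalsQuery(limit = 10) {
  return `
    SELECT ?countryLabel ?capitalLabel WHERE {
      ?country wdt:P31 wd:Q6256;
               wdt:P36 ?capital.
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    LIMIT ${limit}`;
}

// Hypothetical fetch against the endpoint (requires network access).
async function fetchCapitals(limit) {
  const url = `${WIKIDATA_SPARQL}?format=json&query=${encodeURIComponent(buildCapitalsQuery(limit))}`;
  const res = await fetch(url, { headers: { 'User-Agent': 'STAP/0.1 (student project)' } });
  const json = await res.json();
  return json.results.bindings.map(b => ({
    country: b.countryLabel.value,
    capital: b.capitalLabel.value,
  }));
}
```

Retrieving rows like these ahead of time is what allows the service to generate questions quickly at play time.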

4.2. Technological decisions

At the beginning of the project, the team decided to develop the Wikidata API using .NET and the C# programming language. As part of continuous integration, deployment of the application was attempted without success due to Docker issues with the .NET container. The team therefore decided to migrate the whole API to Node.js, using JavaScript and the Express framework. In retrospect, the time spent on the migration was worthwhile, as it reduced the number of potential issues at deployment time.

4.3. Solution strategy in context with quality attributes

  • Reliability
    Scenario: Ensure system stability even under high loads or failure scenarios.
    Solution approach: Perform load tests and assess system reliability, and provide the user with correct and consistent error messages when needed.
    Details: Development concepts section inside Cross-cutting Concepts.

  • Performance
    Scenario: Maintain fast response times even under heavy usage.
    Solution approach: Retrieve Wikidata information beforehand to give quick response times, and perform load tests to assess system behavior.
    Details: <<>>

  • Security
    Scenario: Protect sensitive data and prevent unauthorized access.
    Solution approach: Implement encryption and a logging system.
    Details: User’s login inside Runtime View.

  • Usability
    Scenario: Ensure the system is intuitive and easy to use.
    Solution approach: Conduct user testing and improve the user interface design.
    Details: Usability tests inside Cross-cutting Concepts.

  • Portability
    Scenario: Enable the system to run across different platforms.
    Solution approach: Use Docker containerization and adhere to standards.
    Details: Deployment View.

  • Testability
    Scenario: Facilitate thorough testing and validation of system functionality.
    Solution approach: Implement automated testing frameworks and ensure code coverage.
    Details: Testing inside Cross-cutting Concepts.

  • Availability
    Scenario: Ensure the system is accessible and operational when needed.
    Solution approach: Implement monitoring, proactive maintenance, and disaster recovery plans.
    Details: Monitoring with Grafana inside Cross-cutting Concepts.

4.4. Architecture & Design

  • Microservices: Our game is built using a microservices architecture, which structures the application as a collection of loosely coupled services. Each service encapsulates a specific functionality or business capability, allowing for independent development, deployment, and scaling. By adopting microservices, we promote modularity and flexibility, enabling rapid iteration and innovation.

  • Containerization with Docker: We leverage Docker containerization to package each microservice and its dependencies into lightweight, portable containers. Docker provides a consistent environment across different stages of the development lifecycle, ensuring seamless deployment and scalability. With Docker, we can easily spin up new instances of services, manage dependencies, and streamline our development and deployment workflows.

  • API Gateway: We employ an API gateway as a centralized entry point for all client requests to our microservices. The API gateway serves as a reverse proxy, routing incoming requests to the appropriate microservice based on predefined rules and policies. It provides a unified interface for clients to interact with our system, abstracting away the complexities of the underlying microservices architecture. By consolidating access through the API gateway, we enhance security, governance, and performance while simplifying client interactions.

  • Scalability and Elasticity: With our microservices architecture orchestrated with Docker, we achieve horizontal scalability and elasticity to handle fluctuations in traffic and workload. Docker’s container-based approach enables us to dynamically scale individual services based on demand, ensuring optimal resource utilization and cost efficiency. Combined with automated scaling policies and monitoring, we maintain responsiveness and availability during peak usage periods.

  • Observability and Monitoring: We prioritize observability and monitoring in our architecture to gain insights into the performance, health, and behavior of our microservices. Leveraging tools such as Prometheus, Grafana, and ELK stack, we collect metrics, logs, and traces from across our infrastructure, allowing us to detect anomalies, troubleshoot issues, and optimize system performance. With comprehensive observability, we ensure reliability, maintainability, and continuous improvement of our game platform.

4.5. Team Organization

For developing this project we use GitHub as the version control system. The master branch contains the final version of the product, so every accepted pull request to the master branch is considered a new release. The production branch contains the work currently in production, from which everybody should create their own branch for their specific development work.

  • Documentation: it must always be kept up to date to make our work valuable and consistent.

  • Weekly meetings: weekly discussions about what has been done and what needs to be done are key to our team’s success.

  • GitHub: this version control system not only allows us to share and collaboratively write code, but also provides other resources such as issues and project management (Kanban board) tools for organizing the work to be done. The wiki section also allows us to store the minutes of each scheduled meeting.

  • WhatsApp: allows us to be in constant communication so we can help each other out whenever needed.

  • Discord: useful for holding informal meetings and making decisions whenever it is impossible for all of us to be present in a specific place.

5. Building Block View


5.1. Whitebox Overall System

Diagram
Motivation

This is a basic introduction to the app, highlighting the external services it uses and how they work together.

Contained Building Blocks

  • STAP: The main application, currently represented as a white box; the following sections break it down in detail.

  • WikidataAPI: External API used as the knowledge hub.

5.2. Level 1


Diagram
Motivation

The reasoning behind this separation is to achieve a modular architecture with a clear separation of concerns. It also allows the user management and the question generation to be exposed as APIs.

Contained Building Blocks

  • Frontend: Represents the user interface and manages the quiz logic of the application.

  • User Management: Handles everything related to user accounts.

  • Wikidata Service: Generates questions from Wikidata data.

  • Gateway: Acts as a central hub for managing API traffic.

Important Interfaces

  • Frontend → User Management: Defines how the frontend communicates with the User Management service to log in, retrieve user data, or perform actions requiring authorization.

  • Frontend → Wikidata Service: Defines how the Wikidata Service delivers processed questions to the frontend for display.

  • Wikidata Service → Wikidata API: Represents the service fetching data from the Wikidata API.


5.3. Level 2


5.3.1. User Management Service

…​describes the internal structure of the User Management Service.

Diagram
Contained Building Blocks

  • Authservice: Manages the authentication of the application.

  • UserService: Manages the creation of users and everything related to statistics.

  • MongoDB: Stores the users’ information.

Important Interfaces

  • Authservice → MongoDB: Checks whether the user trying to log in is registered in the system and, if so, generates a JWT token.

  • UserService → MongoDB: Saves the user in the database when creating one, or retrieves/updates the desired statistics.

5.3.2. Wikidata Service

…​describes the internal structure of the Question Generation Service.

Diagram
Contained Building Blocks

  • Wikidata Service: Gets information from the Wikidata API and stores the questions generated by the question generation component.

  • Question Generation: Receives the data and builds questions based on it.

  • Wikidata API: Exposes the information stored in the Wikidata database.

Important Interfaces

  • Wikidata Service → Wikidata API: The service asks Wikidata for information by means of a SPARQL query.

  • Wikidata Service ←→ Question Generation: The service passes the data to the question generator, and the generator returns well-formed questions.
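A sketch of how the Question Generation block might turn Wikidata rows into a multiple-choice question with one correct answer and several distractors, as required in section 1.1. The row shape `{country, capital}` and the question template are assumptions for illustration.

```javascript
// Build a multiple-choice question from Wikidata-derived rows.
// rows: e.g. [{ country: 'Spain', capital: 'Madrid' }, ...]
function buildQuestion(rows, index, numDistractors = 3) {
  const subject = rows[index];
  // Distractors: capitals of the other countries, deduplicated and shuffled.
  const pool = rows
    .filter((_, i) => i !== index)
    .map(r => r.capital)
    .filter((c, i, arr) => arr.indexOf(c) === i);
  const distractors = pool.sort(() => Math.random() - 0.5).slice(0, numDistractors);
  return {
    text: `What is the capital of ${subject.country}?`,
    correct: subject.capital,
    // Mix the correct answer in with the distractors in random order.
    options: [subject.capital, ...distractors].sort(() => Math.random() - 0.5),
  };
}
```

Note that `Math.random()`-based shuffling is fine for a sketch; a Fisher-Yates shuffle would be a better choice in production code.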

6. Runtime View

6.1. User’s Login

Sequence diagram for showing the process of a user logging in:

Login diagram

6.2. User’s sign up

Sequence diagram for showing the process of a user creating an account:

Sign Up diagram

6.3. Data retrieval from WikiData

Sequence diagram for the process of retrieving data from WikiData

WikiData diagram

7. Deployment View


Our project is configured using GitHub Actions in such a way that every release triggers unit and end-to-end tests, followed by an attempt to deploy the application to a server. This allows our team to achieve continuous delivery and deployment.

7.1. Quick deployment guide

Using your Azure account:

  • Create an Ubuntu 20.04 virtual machine from the Azure portal (portal.azure.com)

    • Select an available location (usually Switzerland North, Zone 1, is available)

    • Select the virtual machine size "Standard B1s" (1 vCPU, 1 GiB of memory)

    • Set the username to azureuser

    • Allow SSH on port 22

  • Configure GitHub repository secrets with the server’s information:

    • Download the private key (.pem file) and paste its full textual content into the DEPLOY_KEY secret. Save the file for the later SSH configuration of the virtual machine.

    • Check the public IP in Azure and paste it into DEPLOY_HOST.

    • DEPLOY_USER does not need to be changed

  • Once the virtual machine is created and the repository is configured, go to Network Settings and add extra rules:

    • Open port 80 for accessing the web application, or 443 in case HTTPS is used

    • Open port 8000 to give access to the API gateway

    • Open port 9091 to give access to the Grafana monitoring dashboards

  • Connect to the virtual machine through SSH and configure Docker:

    • Use a tool such as PuTTY or MobaXterm to connect to the server over SSH

    • Use the public IP address and the local .pem file to make the connection.

    • Run the following commands to prepare the virtual machine:

      sudo apt update
      sudo apt install apt-transport-https ca-certificates curl software-properties-common
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
      sudo apt update
      sudo apt install docker-ce
      sudo usermod -aG docker ${USER}
      sudo curl -L "https://github.com/docker/compose/releases/download/1.28.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
      sudo chmod +x /usr/local/bin/docker-compose
  • Make a release in GitHub:

    • On the right-hand side of the main Code section of our repository there is a section called Releases. Add a new version there, following the version naming convention.

    • Once the release is made, the GitHub Actions workflows are triggered; when everything finishes, the containers will have been tested and will be running.

    • If some test fails during the process, deployment will be automatically aborted.

7.2. Infrastructure

General view of system’s infrastructure

deployment diagram

7.3. Infrastructure Level 1 - Azure Ubuntu Server

Describe (usually in a combination of diagrams, tables, and text):

  • distribution of a system to multiple locations, environments, computers, processors, .., as well as physical connections between them

  • important justifications or motivations for this deployment structure

  • quality and/or performance features of this infrastructure

  • mapping of software artifacts to elements of this infrastructure

For multiple environments or alternative deployments please copy and adapt this section of arc42 for all relevant environments.

The Ubuntu server gives us an isolated machine with the minimal configuration and installations required to run our services. Hosting the server on Azure minimizes the cost of keeping that machine running and relieves us of responsibilities such as security, availability and maintenance.

7.4. Infrastructure Level 2 - Docker

Here you can include the internal structure of (some) infrastructure elements from level 1.

Please copy the structure from level 1 for each selected element.

Instead of running the whole application on a single virtual machine by itself, the application is split into different services that can be completely isolated. Docker allows us to create containers with the minimum amount of resources needed to run each specific service, so resources are not wasted and heavily used services do not starve the others. Each container runs the Docker image for its specific service. Since every service is isolated at deploy time, services do not need to be written in the same programming language or follow the same architectural patterns, and requests are served through independent endpoints.

The virtual machine will contain as many containers as services in the application.

For now, the project contains:

  • Web application service running on port 3000

  • Gateway (middleware) service running on port 8000

  • Wikidata API running on port 8001

  • Users API running on port 8003

  • MongoDB server running on port 27017

  • Prometheus running on port 9090 for monitoring

  • Grafana running on port 9091 for analytics and monitoring
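
The same mapping, written as a small configuration module such as deployment or health-check scripts might use. The service labels are our own; only the port numbers come from the list above:

```javascript
// Port assignments of the containers running on the virtual machine.
const SERVICES = {
  webapp:     3000,  // web application
  gateway:    8000,  // API gateway (middleware)
  wikidata:   8001,  // Wikidata API
  users:      8003,  // Users API
  mongodb:   27017,  // MongoDB server
  prometheus: 9090,  // monitoring
  grafana:    9091,  // analytics and monitoring dashboards
};

// Resolve the base URL of an HTTP service on a given host.
function urlFor(service, host = 'localhost') {
  const port = SERVICES[service];
  if (port === undefined) throw new Error(`unknown service: ${service}`);
  return `http://${host}:${port}`;
}
```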

7.5. Infrastructure Level 3 - GitHub actions

GitHub Actions provide us with continuous automatic delivery and integration, automating the deployment phase at each release.

7.6. Motivation

In the deployment view of our software architecture, we delineate the physical deployment of our system components across various environments. At the core of our deployment strategy is the utilization of cloud-based infrastructure, specifically leveraging Azure for its robustness and scalability. Our server components, including the web application, gateway, user services, and MongoDB server, are encapsulated within Docker containers to ensure portability and consistency across deployments. Continuous integration and deployment pipelines are established using GitHub Actions, facilitating seamless updates and releases of our system components. Monitoring and logging solutions, Prometheus and Grafana, are integrated to provide insights into system health and performance. Overall, our deployment view showcases a resilient, scalable, and automated deployment architecture tailored to meet the demands of our system’s evolving requirements.

7.7. Mapping of Building Blocks into Infrastructure

Name Responsibility

Frontend

Web App container exposed on port 3000.

User Management

User service container.

Wikidata Service

Wikidata service container.

Gateway

API Gateway service exposed on port 8000.

8. Cross-cutting Concepts

Content

This section describes overall, principal regulations and solution ideas that are relevant in multiple parts (= cross-cutting) of your system. Such concepts are often related to multiple building blocks. They can include many different topics, such as

  • models, especially domain models

  • architecture or design patterns

  • rules for using specific technology

  • principal, often technical decisions of an overarching (= cross-cutting) nature

  • implementation rules

Motivation

Concepts form the basis for conceptual integrity (consistency, homogeneity) of the architecture. Thus, they are an important contribution to achieve inner qualities of your system.

Some of these concepts cannot be assigned to individual building blocks, e.g. security or safety.

Form

The form can be varied:

  • concept papers with any kind of structure

  • cross-cutting model excerpts or scenarios using notations of the architecture views

  • sample implementations, especially for technical concepts

  • reference to typical usage of standard frameworks (e.g. using Hibernate for object/relational mapping)

Structure

A potential (but not mandatory) structure for this section could be:

  • Domain concepts

  • User Experience concepts (UX)

  • Safety and security concepts

  • Architecture and design patterns

  • "Under-the-hood"

  • development concepts

  • operational concepts

Note: it might be difficult to assign individual concepts to one specific topic on this list.

Possible topics for crosscutting concepts
Further Information

See Concepts in the arc42 documentation.

8.1. Domain Concepts

8.1.1. Question

In our app, a question is always represented as a data structure with the following format:

{
    text: "What is the capital of Asturias?",
    answers: ["Gijón", "Oviedo", "Cangas de Onís"],
    correctAnswer: 1,
    wikiLink: "https://www.wikidata.org/wiki/Q14317"
}

Benefits:

  • Consistency: This format ensures consistent representation of questions throughout the app, reducing errors and simplifying code maintenance.

  • Clarity: By explicitly defining the data format, developers can clearly understand how to work with question data within the codebase.

  • Flexibility: By defining an array and a correct index, the array could be of multiple sizes.
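
A minimal validator for this format might look as follows; this is a sketch, not necessarily how the code base enforces it:

```javascript
// Check that an object follows the shared question format:
// a text, at least two string answers, an in-range correct index, and a wiki link.
function isValidQuestion(q) {
  return (
    typeof q === 'object' && q !== null &&
    typeof q.text === 'string' &&
    Array.isArray(q.answers) && q.answers.length >= 2 &&
    q.answers.every((a) => typeof a === 'string') &&
    Number.isInteger(q.correctAnswer) &&
    q.correctAnswer >= 0 && q.correctAnswer < q.answers.length &&
    typeof q.wikiLink === 'string'
  );
}
```

Because the correct answer is an index into the array, the same check works for any number of distractors.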

8.2. UX Concepts

8.2.1. Color

We decided to use a color palette of 4 colors:

Name Color

Background

#191919

Text

#f2ecff

Primary

#00c896

Danger

#e35a2a

Benefits:

  • Clarity: Thanks to this simple palette it is very easy to identify when something is correct or not.

  • Consistency: By using a limited set of colors, the overall visual design of the application will be cohesive and harmonious.

  • Accessibility: The chosen colors provide good contrast ratios, ensuring the content is readable and accessible for users with various visual abilities.

  • Branding: The selected colors can be used to reinforce the application’s brand identity and make it recognizable to users.

The chosen color palette strikes a balance between functionality, aesthetics, and branding. The dark background with light text provides a high-contrast theme that is easy on the eyes, while the primary and danger colors are used sparingly to highlight important information or actions.

8.3. Development concepts

8.3.1. Testing and Monitoring

We performed Load Testing, Unit Testing, End-to-end testing and Code Analysis with SonarCloud. The results obtained can be checked here: Appendix I - Testing results

8.3.2. Configurability

The application has simple configurable game features: users can select between two game modes (normal and trivia) and two difficulty levels (easy and hard).

  • The normal mode game consists of 10 random questions, each with a limited amount of time to answer before the possibility to answer is lost. The easy and hard difficulties differ in the amount of time the user has to answer each question.

  • The trivia mode game consists of 10 questions, generated based on the category resulting from rolling a dice. There are 6 possible categories, including sports, science, history, geography and entertainment.

Additionally, there is an option in the main application view where random music can be played.
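
These settings can be modeled as a small configuration helper. Only the modes, difficulties and question count come from the description above; the concrete time limits are assumed values for illustration:

```javascript
const GAME_MODES = ['normal', 'trivia'];
const DIFFICULTIES = ['easy', 'hard'];

// Per-question time limit in seconds; hard gives less time.
// The 30/15 values are illustrative assumptions, not the real settings.
function timeLimitFor(difficulty) {
  if (!DIFFICULTIES.includes(difficulty)) throw new Error(`unknown difficulty: ${difficulty}`);
  return difficulty === 'easy' ? 30 : 15;
}

// Build the configuration for one game.
function createGameConfig(mode, difficulty) {
  if (!GAME_MODES.includes(mode)) throw new Error(`unknown mode: ${mode}`);
  return {
    mode,
    questions: 10, // both modes play 10 questions
    timeLimit: timeLimitFor(difficulty),
  };
}
```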

8.3.3. Data access

The development team has followed two different approaches for data access, one for development and one for production. While developing the application, the team created a shared database located in the cloud, which allowed us to work locally with the same data by means of a key string. To move the application into production, deployed on an Azure virtual machine running Docker containers, the team created a MongoDB container with an associated volume that makes the data persistent.
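
A sketch of how this dual setup might be resolved in code; the environment variable names (MONGODB_URI, DEV_MONGODB_URI) and the database name are assumptions, not the project's actual configuration:

```javascript
// Pick the MongoDB connection string depending on the environment.
function getMongoUri(env = process.env) {
  if (env.NODE_ENV === 'production') {
    // The mongodb container on the Docker network, backed by a volume.
    return env.MONGODB_URI || 'mongodb://mongodb:27017/stap';
  }
  // Shared cloud database used during development, accessed via a key string.
  if (!env.DEV_MONGODB_URI) throw new Error('missing DEV_MONGODB_URI key string');
  return env.DEV_MONGODB_URI;
}
```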

9. Architecture Decisions

Throughout the development of the application, decisions had to be made as problems arose. These are the final decisions we have made according to their advantages. If you want a description of each of the technologies we have chosen, go to the Glossary of the documentation.

9.1. Microservices architecture

The team opted for a microservices architecture as the foundation of our system due to the advantages it provides. By breaking down our application into smaller, independently deployable services, we gain scalability and flexibility. Each microservice operates autonomously, allowing us to develop, deploy, and update components without affecting the entire system. Furthermore, microservices promote technology diversity, enabling us to choose the best tools for each service’s specific needs. By means of an API Gateway, all the services can communicate and be served as if they were a single system.

9.2. API Gateway

To streamline communication between our backend services, we’ve implemented an API gateway. This gateway acts as a central hub, providing a unified entry point for all client requests. By consolidating communication through the API gateway, we simplify access control, load balancing, and monitoring across our system. This approach enhances scalability and maintainability while enabling us to implement cross-cutting concerns such as authentication and rate limiting in a centralized manner. The API gateway plays a pivotal role in orchestrating interactions between services, optimizing performance, and ensuring a cohesive and reliable architecture.
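
At its core, such a gateway reduces to a prefix table mapping public paths to internal services. The routes and container host names below are illustrative, not the project's actual configuration:

```javascript
// Prefix table: which internal service handles which public path.
const ROUTES = [
  { prefix: '/questions', target: 'http://wikidata-service:8001' },
  { prefix: '/users',     target: 'http://users-service:8003' },
];

// Resolve the internal URL for an incoming request path.
function resolveTarget(path) {
  const route = ROUTES.find((r) => path.startsWith(r.prefix));
  if (!route) return null; // the gateway would answer 404
  return route.target + path;
}
```

Cross-cutting concerns such as authentication or rate limiting would be applied once, before this lookup, instead of in every service.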

9.3. Docker containers

Docker containers are used for our web application and an API gateway for inter-service communication, driven by their portability, scalability, and maintainability advantages. Docker ensures consistent deployment across environments, facilitating independent scaling of services. By routing communication through the API gateway, we centralize access control and monitoring, simplifying management and promoting modularity and flexibility. This approach optimizes system management, scalability, and interoperability, aligning with our project’s architectural goals while enhancing monitoring capabilities for streamlined performance tracking and issue resolution.

9.4. React & Tailwind CSS

We’re building our web application with React and Tailwind CSS for their efficiency and modern development approach. React’s component-based architecture simplifies UI creation and updates, while Tailwind CSS’s utility-first framework streamlines styling for rapid prototyping and consistent design. This combination allows us to create a visually appealing and highly responsive web application efficiently, aligning with our goal of delivering a modern, user-friendly interface while maintaining flexibility and scalability in our frontend development process.

9.5. Node.js

Initially the Wikidata service for generating game questions was developed using .NET. However, encountering deployment issues with Docker in Azure prompted us to migrate all backend services to Node.js and Express. This strategic move ensures a smoother, more reliable and even more comfortable deployment process, enhancing system reliability and maintainability.

We’ve chosen Node.js with Express for developing all backend services due to its lightweight, efficient, and scalable nature thanks to modularity. Node.js offers non-blocking I/O operations, enabling high concurrency and responsiveness, which is crucial for handling asynchronous tasks common in web applications. Express, a minimalist web framework for Node.js, simplifies the development of robust and RESTful APIs, providing essential features like routing, middleware support, and error handling. Additionally, the vibrant ecosystem of Node.js libraries and modules enhances productivity and enables seamless integration with other technologies and services. Overall, Node.js with Express empowers us to build performant, scalable, and maintainable backend services that align with our project’s requirements and architectural goals.

The following table contains the most relevant design decisions we have made, with their advantages and disadvantages:

Table 3. Architectural Records
Decision Advantages Disadvantages

React.js

Quite easy to learn in comparison to other front-end libraries. Increasingly popular in the web.

Not all of us know about its usage

Tailwind CSS

Consistent and unified design system and its ability to speed up the development process. Rapidly growing utility-first CSS framework

Quite new for most of us

MongoDB

It does not need to be started manually. Free and easy to understand

We are quite new with MongoDB.

Docker

Fast deployment and ease of moving/maintaining applications. Easy to adopt, as we already have Dockerfile examples

We do not have much experience using Docker

PlantUML

Allows drawing diagrams very easily, with a simple syntax.

Does not allow as much control over the exact layout of the elements in the diagram as other tools.

Node.js

For small applications it’s a very fast technology. It’s easy to learn and we already know a bit about it

Its performance is reduced with heavy computational tasks

Wikidata API also in Node.js

Better project structure. Same language as users API. Easier for us to deploy it

Its performance is reduced with heavy computational tasks

Contents

Important, expensive, large scale or risky architecture decisions including rationales. With "decisions" we mean selecting one alternative based on given criteria.

Please use your judgement to decide whether an architectural decision should be documented here in this central section or whether you better document it locally (e.g. within the white box template of one building block).

Avoid redundancy. Refer to section 4, where you already captured the most important decisions of your architecture.

Motivation

Stakeholders of your system should be able to comprehend and retrace your decisions.

Form

Various options:

  • ADR (Documenting Architecture Decisions) for every important decision

  • List or table, ordered by importance and consequences or:

  • more detailed in form of separate sections per decision

Further Information

See Architecture Decisions in the arc42 documentation. There you will find links and examples about ADR.

10. Quality Requirements

Content

This section contains all quality requirements as quality tree with scenarios. The most important ones have already been described in section 1.2. (quality goals)

Here you can also capture quality requirements with lesser priority, which will not create high risks when they are not fully achieved.

Motivation

Since quality requirements will have a lot of influence on architectural decisions you should know for every stakeholder what is really important to them, concrete and measurable.

Further Information

See Quality Requirements in the arc42 documentation.

10.1. Quality Tree

Diagram
Content

The quality tree (as defined in ATAM – Architecture Tradeoff Analysis Method) with quality/evaluation scenarios as leafs.

Motivation

The tree structure with priorities provides an overview for a sometimes large number of quality requirements.

Form

The quality tree is a high-level overview of the quality goals and requirements:

  • tree-like refinement of the term "quality". Use "quality" or "usefulness" as a root

  • a mind map with quality categories as main branches

In any case the tree should include links to the scenarios of the following section.

10.2. Quality Scenarios

Contents

Concretization of (sometimes vague or implicit) quality requirements using (quality) scenarios.

These scenarios describe what should happen when a stimulus arrives at the system.

For architects, two kinds of scenarios are important:

  • Usage scenarios (also called application scenarios or use case scenarios) describe the system’s runtime reaction to a certain stimulus. This also includes scenarios that describe the system’s efficiency or performance. Example: The system reacts to a user’s request within one second.

  • Change scenarios describe a modification of the system or of its immediate environment. Example: Additional functionality is implemented or requirements for a quality attribute change.

Motivation

Scenarios make quality requirements concrete and allow to more easily measure or decide whether they are fulfilled.

Especially when you want to assess your architecture using methods like ATAM you need to describe your quality goals (from section 1.2) more precisely down to a level of scenarios that can be discussed and evaluated.

Form

Tabular or free form text.

Usage scenarios

Quality goal Motivation Usage scenario Priority

Reliability

The application must provide users with consistent performance and predictable results.

When users access the web, it must behave the same every time, giving nearly identical results and response times.

Very high

Performance

The application must have a reasonable response time. Slow applications are poorly received by users.

The application must respond within 5 seconds with 10 concurrent users.

Very high

Security

Our web must be secure, not only to protect data but to provide a reliable solution to our users. If we cannot assure our clients that the web is secure, no one will use it.

Data will be only accessible by its owner. If a user tries to access other people’s information, the system will deny the operation, as data will be stored in a secure system.

Very high

Usability

To make the website stand out from the competition, it must be easy to use, attract attention and be aesthetically pleasing.

The user must be able to identify the game elements shown on the screen, as well as the menu entries for the different functionalities, such as viewing the profile or logging out.

Very high

Portability

To reach the maximum number of users the application must work in the maximum number of infrastructures.

The game experience and functionalities must be the same independently from the device which the user is connecting from.

High

Testability

All features of the application must be testable in order to verify that the web built was the one asked for.

The unit tests passed by the developers must generate an observable output.

High

Availability

The application must be available 24 hours a day, every day of the week.

The user must be able to play at any time, since users play in their free time.

High

Change scenarios

Quality goal Motivation Change scenario Priority

Maintainability

An application should be maintainable in order to remain usable over the years, to improve existing functionality and to fix defects.

When developers must introduce a new feature to the web, they should be able to do it without changing the software architecture.

High

11. Risks and Technical Debts

This section contains a list of identified risks that the project will face during its lifetime. In addition to it, each particular risk comes with a brief self-explanatory description, the probability of its occurrence, its impact on the project and a solution on how to minimize it or mitigate it.

11.1. Risks

Risk Description Probability Impact Solution

Complications with the project characteristics

Almost no one on the team has done a project of this size before, and there may be some trouble.

Medium

High

Each member will try to maximize their knowledge of some aspect of the project during the first weeks, so that there is someone close to a leader for each of the possible key aspects of the project.

Problems with wikidata

The team has used Wikidata only once before, and not even all of us have.

High

Very high

We must read some documentation and try out some basic features to familiarize ourselves with Wikidata.

Teamwork issues

The members of the team have never worked together. This may cause problems such as lack of communication or trust in each other’s work.

Medium

Medium

We will try to keep in touch a few times a week to follow each other’s progress on our tasks, and we will try to build confidence in each other throughout the development of the project, as most of us met in this subject.

Differences with technologies

Some members are less experienced with certain aspects of the development.

Medium

Low

The members who know more about each aspect will help the others understand the things they find difficult.

Deadlines

The project is structured around deadline days on which our work is presented.

Very high

High

The team will follow the project planning to avoid problems with each of the deadlines.

11.2. Technical Debts

Wikidata

The day may come when the data in Wikidata is outdated while the app is still working. It is unlikely, but it could happen.

Availability

Using Wikidata to retrieve the questions means that if the Wikidata service fails for some reason, the app fails as well.

Contents

A list of identified technical risks or technical debts, ordered by priority

Motivation

“Risk management is project management for grown-ups” (Tim Lister, Atlantic Systems Guild.)

This should be your motto for systematic detection and evaluation of risks and technical debts in the architecture, which will be needed by management stakeholders (e.g. project managers, product owners) as part of the overall risk analysis and measurement planning.

Form

List of risks and/or technical debts, probably including suggested measures to minimize, mitigate or avoid risks or reduce technical debts.

Further Information

See Risks and Technical Debt in the arc42 documentation.

12. Glossary

Contents

The most important domain and technical terms that your stakeholders use when discussing the system.

You can also see the glossary as source for translations if you work in multi-language teams.

Motivation

You should clearly define your terms, so that all stakeholders

  • have an identical understanding of these terms

  • do not use synonyms and homonyms

Form

A table with columns <Term> and <Definition>.

Potentially more columns in case you need translations.

Further Information

See Glossary in the arc42 documentation.

Term Definition

STAP

A web application where users can register and log in in order to play. The game consists of answering a number of questions of different types and subjects, obtaining a prize for each correctly answered question.

Wikidata

It is a collaborative, free and open knowledge base that stores structured information. It aims to provide a common source of data that can be used by Wikimedia projects and anyone else, under a public domain license.

Saber y ganar

It is a Spanish television quiz show. It involves contestants competing in several rounds of questions to test their knowledge in different categories.

Diagram

A visual representation of information, data flow, processes, or systems using symbols, shapes, and lines to illustrate relationships, connections, and concepts.

Front-end

Refers to the part of a software application or website that users interact with directly. It encompasses the user interface, design elements, and functionality visible to users.

Back-end

The behind-the-scenes part of a software application or website responsible for handling data processing, server-side logic, and database interactions. It includes the server, database, and application logic that users do not directly interact with.

Microservices

An architectural approach to building software applications as a collection of small, loosely coupled services. Each service is designed to perform a specific business function and can be developed, deployed, and scaled independently.

Stakeholder

Individuals or groups with an interest or concern in a project, product, or organization. Stakeholders may include any party affected by or involved in the outcomes of a particular initiative.

Docker

A platform for developing, shipping, and running applications in containers. It allows developers to package applications and their dependencies into standardized units called containers, providing a consistent environment for software deployment across different computing environments.

Deployment

The process of making a software application, website, or service available for use. It involves taking the codebase of a developed application and installing it onto servers or other computing infrastructure so that it can be accessed by end-users.

MongoDB

A popular open-source NoSQL database management system known for its flexibility, scalability, and ease of use. It stores data in a flexible, JSON-like format called BSON and is commonly used for applications requiring high-volume data storage and real-time data processing.

API (Application programming interface)

Set of rules and protocols that allows different software applications to communicate and interact with each other. APIs define the methods and data formats that applications can use to request and exchange information. They enable developers to access the functionality of other software components or services without having to understand their internal workings. APIs are commonly used to integrate third-party services, access data from remote servers, and build modular and interoperable software systems.

13. Appendix I - Testing results

Contents

Results of all the testing performed.

13.1. Testing and Monitoring

  • Unit Testing: Unit testing plays a crucial role in our development workflow, providing confidence in the stability and functionality of our application. By utilizing .test.tsx files for our React components and .test.js files for our Node.js backend, we ensure that our code behaves as expected, regardless of the environment. This comprehensive approach to testing allows us to catch and address issues early in the development cycle, leading to higher-quality software and smoother deployment processes.

  • Code Analysis: We employed the SonarCloud tool to monitor the code covered by these tests, while Continuous Integration practices were implemented using GitHub Actions.

    Results for web application:
Code coverage for the web application code
Time results for the web application code test
Results for gateway service:
Code coverage for the gateway service code
Results for users service:
Code coverage for the users service code
Results for wikidata service:
Code coverage for the wikidata service code
  • Load Testing and Monitoring: We used Gatling to record user simulations that consist of logging into the application, playing the existing game modes, and checking the statistics and leaderboards. Gatling then allowed us to measure the performance of the application and the average response times when creating 2, 5, 10 and 25 users per second during 60 seconds, running the previously mentioned simulations on 2 different Azure machines. In particular, load testing has been performed using a 1-CPU machine with almost 1 GB of RAM and a 2-CPU machine with 8 GB of RAM.

    • 1-cpu machine

      With this machine, all requests were answered quickly when 2 users per second were created (120 users in total), with almost all requests responding in less than 1 second, a minimum response time of just 24 milliseconds and a maximum response time of 1.6 seconds.
2 users per second in 60 seconds user-simulation overall results with 1-cpu machine
2 users per second in 60 seconds user-simulation specific graph results with 1-cpu machine
When creating 5 users per second (350 users in total), around 80% of the responses took less than a second, with a minimum response time of 24 milliseconds but a maximum response time of 10 seconds, which is far too long for a web application.
5 users per second in 60 seconds user-simulation overall results with 1-cpu machine
5 users per second in 60 seconds user-simulation specific graph results with 1-cpu machine
Unfortunately, when creating a load of 10 users per second (600 users in total), 41% of the responses failed and around 70% of the responses either failed or took more than a second to be answered. With even more load, almost all responses would surely fail.
10 users per second in 60 seconds user-simulation overall results with 1-cpu machine
10 users per second in 60 seconds user-simulation specific graph results with 1-cpu machine
With this 1-cpu, 1GB of RAM Azure machine we could support around 200 users making constant requests without causing a denial of service, while providing reasonable response times.
    • 2-cpu machine

      With this machine, all requests were answered quickly when 2 users per second were created (120 users in total): almost all requests responded in less than 1 second, with a minimum response time of just 24 milliseconds and a maximum response time of 1.8 seconds.
2 users per second in 60 seconds user-simulation overall results with 2-cpu machine
2 users per second in 60 seconds user-simulation specific graph results with 2-cpu machine
When creating 5 users per second (350 users in total), around 80% of the responses took less than a second, with a minimum response time of 24 milliseconds but a maximum response time of 10 seconds, exactly the same maximum we obtained with the other machine, and a similar mean response time.
5 users per second in 60 seconds user-simulation overall results with 2-cpu machine
5 users per second in 60 seconds user-simulation specific graph results with 2-cpu machine
When creating a load of 10 users per second (600 users in total), no response failed, although the maximum response time was 58 seconds with a mean response time of 2 seconds, which means that almost every response took less than a second.
10 users per second in 60 seconds user-simulation overall results with 2-cpu machine
10 users per second in 60 seconds user-simulation specific graph results with 2-cpu machine
Finally, when creating a load of 25 users per second (1500 users in total), just 7% of the responses failed, and the maximum response time was 60 seconds with a mean response time of 5 seconds.
25 users per second in 60 seconds user-simulation overall results with 2-cpu machine
25 users per second in 60 seconds user-simulation specific graph results with 2-cpu machine
Using this 2-cpu, 8GB machine, response times are not improved by the more powerful hardware, so most of that work must be done programmatically, by improving our software.
However, better hardware does allow the application to support much more load, meaning more users playing at the same time: this time we could support more than 400, but fewer than 800, simultaneous users.
Looking at the specific graph results of each of the simulations performed, most of the load is always produced at the beginning of the simulation, when the users have to log in.
Then, while users are playing games, the number of requests drops considerably, since all the information needed to play a game is requested at the beginning.
As a general conclusion, following the Azure pricing plan for virtual machines: with low load requirements, paying around $35 per month for a 1-cpu, 1GB RAM web server supports around 200 users using the application simultaneously.
If higher load is required, paying around $100 per month for a 2-cpu, 8GB RAM server more than doubles the number of supported users, handling over 400 simultaneous users.
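The injection model above can be sketched in plain JavaScript. This is a simplified stand-in, not Gatling's API: it creates `rate` virtual users per second for `seconds` seconds, each running a given async user journey (log in, play, check statistics), and counts successes and failures. Unlike Gatling, arrivals are not spaced in real time here; the point is the arrival count and the accounting.

```javascript
// Simplified stand-in for Gatling's open-model injection used in our tests.
async function runOpenLoad(rate, seconds, journey) {
  const runs = [];
  for (let s = 0; s < seconds; s++) {
    // `rate` new virtual users arrive in each simulated second.
    for (let u = 0; u < rate; u++) {
      runs.push(journey());
    }
  }
  // Wait for every journey; failed requests reject instead of resolving.
  const results = await Promise.allSettled(runs);
  const ok = results.filter((r) => r.status === "fulfilled").length;
  return { total: runs.length, ok, failed: runs.length - ok };
}

// Example with a stubbed journey: 2 users/second for 60 seconds = 120 users.
runOpenLoad(2, 60, async () => "ok").then((summary) => {
  console.log(summary); // { total: 120, ok: 120, failed: 0 }
});
```

In the real simulations the journey function would issue HTTP requests against the deployed application instead of resolving immediately.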
  • E2e Testing: We used behavior-driven development scenarios written in the Gherkin language as a basis for our end-to-end tests. We developed seven e2e tests that check the main paths through our application, making sure everything works correctly when integrated together. We test every endpoint of the app, except where the returned information may differ between executions of the same request.

e2e tests developed, running on our local machine
e2e tests results, running on our local machine
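An illustrative sketch of the shape these scenarios take (the feature and step names here are hypothetical, not one of the actual seven tests):

```gherkin
Feature: Playing a quiz game

  Scenario: A registered user logs in and answers a question
    Given a registered user on the login page
    When the user logs in with valid credentials
    And the user starts a new game
    Then a question with one correct answer and several distractors is shown
```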