Event Management Backend

The Event Management Backend is a REST API developed using the FastAPI framework. It serves as the backend for the Event Management System, providing the essential data and functionality for a calendar-based event management application.

This project was created as part of the final assignment for AP7PD, which specifically required a backend implementation. For AP7MP, the focus was on the frontend; however, since the same frontend project is being reused, this backend is introduced as a dependency. While not formally part of the graded assignment for AP7MP, the backend plays a crucial role in providing the necessary functionality.

Overview / Key Features

The backend is designed with robust and secure API functionality to support the Event Management System. It includes the following key features:

  • User Management: Provides endpoints for user registration and login. Passwords are securely stored as salted bcrypt hashes, ensuring strong protection against unauthorized access.

  • Token-Based Authentication: Implements JWT tokens using the Bearer scheme for secure and stateless authentication. The system supports both access tokens (short-lived, typically one hour) and refresh tokens (long-lived, typically one month), allowing users to obtain new access tokens without reauthentication.

  • Session and Token Management: Tokens are validated server-side and tracked in a dedicated database table, enabling advanced session management features:

    • Invalidation endpoints allow users to revoke access to individual tokens (e.g., in case of a lost device).
    • Session endpoints provide details on token creation, expiration, and active sessions.
  • Security-First Design: Every endpoint includes proper security measures to prevent unauthorized access to user data. While the architecture is built to potentially allow for future admin-level accounts, no such functionality is included in this implementation.

  • Category Management: Users can manage their categories through dedicated endpoints, including creating, viewing, and deleting their custom categories.

  • Event Management: Includes comprehensive endpoints for managing events. Events support multiple categories and can be customized to suit various use cases.

  • Invitation System: Provides endpoints for inviting users to events. Invitations generate notifications, and invitees can accept or decline invitations. Accepted invitations automatically add the invitee as an attendee, allowing events to appear on their calendar.

  • Notification System: Notifications are produced for specific actions, such as:

    • A user being invited to an event.
    • The inviter being notified when the invitee accepts or declines their invitation.

    Notifications can be retrieved through dedicated endpoints. There is currently no support for creating custom notifications through endpoints (e.g. by an administrator); however, the project is designed with scalability in mind, and such functionality would be fairly easy to add in the future, if needed.
  • Logging Support: Basic logging functionality records important actions and aids in debugging. While functional, logging could be expanded for more comprehensive coverage in future iterations.

  • Deployment Ready: The backend includes a Dockerfile and docker-compose configuration for simple and reproducible deployment, complete with a MongoDB instance.
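To illustrate the access/refresh token scheme described above, here is a minimal HS256 JWT sketch using only the Python standard library. This is an illustrative stand-in, not the project's actual implementation: the secret and user id are made up, and a real deployment would use a maintained JWT library rather than hand-rolled signing.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = "not-a-real-secret"  # stands in for the JWT_SECRET_TOKEN setting


def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as required by the JWT spec."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_token(user_id: str, token_type: str, lifetime_s: int) -> str:
    """Build a signed HS256 JWT with subject, type, and expiration claims."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({
        "sub": user_id,
        "type": token_type,  # "access" or "refresh"
        "exp": int(time.time()) + lifetime_s,
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = _b64url(
        hmac.new(SECRET.encode(), signing_input, hashlib.sha256).digest()
    )
    return f"{header}.{payload}.{signature}"


# Short-lived access token (1 hour) and long-lived refresh token (30 days):
access_token = make_token("user-123", "access", 3600)
refresh_token = make_token("user-123", "refresh", 30 * 24 * 3600)
```

The refresh token only differs in its `type` claim and lifetime; the server can then reject refresh tokens presented where an access token is expected.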

Technology Stack

The backend is implemented using the following technologies:

  • Python: Version 3.12 or higher.
  • FastAPI: A modern, fast (high-performance) web framework for building APIs with Python.
  • MongoDB: NoSQL database used to store events, users, and categories.
  • motor: Asynchronous MongoDB driver for Python.
  • Beanie: An async ODM (Object Document Mapper) for MongoDB with Motor.
  • Rye: A dependency management tool for Python.
  • Docker: Used for containerization and deployment.

Frontend Integration

This backend is designed to work seamlessly with the Event Management Frontend, a native Android application written in Kotlin. The frontend interacts with this backend to manage users, events, attendees, and categories.

You can find the source code and detailed information about the frontend at: Event Management Frontend Repository

Installation

Tip

Instead of manually installing the project, you can also use the provided Dockerfile and docker-compose.yml file. See the Docker section for more information. This will very likely be the easiest and quickest way to get the project up and running. If you're not interested in the docker install, keep reading.

If you only wish to run the project and you're not interested in doing some further development on it, you can simply use pip and venv to install all the necessary dependencies:

Note

Make sure you're in the root directory of the project (the same folder that this README file is in).

# Create & activate python virtual environment
python -m venv .venv
. .venv/bin/activate

# Install the dependencies
python -m pip install -e .

This will only install the runtime dependencies necessary to actually run the code (the development dependencies will not be installed). The dependencies will be installed into their own isolated virtual environment, which won't interfere with the system-wide Python packages.

For development

This project uses rye, which is a dependency management tool for Python. You will need to install it. (On Arch Linux, you can run pacman -S rye.)

Once you get rye installed, go to the project's root directory and run:

rye sync

This will install all of the necessary dependencies you'll need to run this project, including the development dependencies, in a virtual environment. To then activate this environment, you can run:

. .venv/bin/activate

Running

To run the project, make sure you've activated your virtual environment first, then simply execute:

poe run

Note that by default, this will start listening on localhost:8000. If you wish to change this, you can use the --host and --port flags, like so:

poe run --host 0.0.0.0 --port 8080

(Changing the host from localhost to 0.0.0.0 is necessary to make the server accessible from other devices on the network.)

If you wish to run the project in development mode (with auto-reload on change), you can instead use:

poe run-dev

Important

You will also need to have configured the project first; most notably, you'll need to set a MongoDB connection string, which also means you'll need to have a MongoDB instance running. You can read more about how to configure the project in the Configuration section.

Docker

As an alternative to manually installing & running the project, you can also use Docker, which simplifies deployment by handling dependencies automatically in a containerized environment that will work consistently pretty much anywhere. This approach is especially convenient if you're not interested in doing any further development of the project. Additionally, the provided Docker Compose file includes a MongoDB instance, eliminating the need to set it up yourself.

First, you will need to have Docker and Docker Compose installed. Once you have that, you can run:

sudo docker-compose up

This will build the docker image from the attached Dockerfile and pull another image for the MongoDB instance. Once done, it will start the backend server and the MongoDB instance. By default, the backend will be available at port 8000. Feel free to edit the docker-compose.yml file and change it.

Note

Note that if you change the code, it is necessary to also rebuild the docker image. You can do that by adding the --build flag to the command:

sudo docker-compose up --build

If you wish to run the containers in the background, you can add the -d flag:

sudo docker-compose up -d

Important

You will also need to have configured the project first. For the Docker install, though, you don't need to set the MongoDB connection string yourself, as the MongoDB instance runs through Docker as well, and the connection string is set from there. That said, there are still some other required values that you will need to set. You can read more about how to configure the project in the Configuration section.

Note

By default, the docker container will always use a brand new database. If you wish to persist the database across runs, you will need to modify the docker-compose file and add a Docker volume or a directory bind mount. By default, MongoDB stores its data in the /data/db directory.
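For example, a named volume could be added to the compose file roughly like this (a sketch; the mongodb service name is an assumption and must match the one in the actual docker-compose.yml):

```yaml
services:
  mongodb:
    # ...existing image and environment configuration...
    volumes:
      - mongodb-data:/data/db   # persist MongoDB's data directory across runs

volumes:
  mongodb-data:
```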

Configuration

To configure the project, you can either set the environment variables manually or create a .env file in the root directory of the project (the same folder that this README file is in).

Currently, you will need to set two environment variables:

  • JWT_SECRET_TOKEN: This is the secret token that will be used to sign the JWT tokens. It should be a long, random string; I'd recommend generating it with the openssl rand -hex 32 command. (If you're on Windows, you can use openssl from WSL or generate the token through some other means. You could also reuse the value from the example below, although I wouldn't recommend that for production use.)
  • MONGODB_URI: This is the connection string for the MongoDB instance. It should be in the format mongodb://username:password@host:port/database. If you're using the admin authSource, you can also append the ?authSource=admin suffix to this string. Note that if you're using docker-compose, you don't need to set this value, as it's already set in the docker-compose.yml file.
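If openssl isn't available, an equivalent secret can be generated with Python's standard secrets module:

```python
import secrets

# 32 random bytes, hex-encoded into a 64-character string;
# equivalent to `openssl rand -hex 32`
jwt_secret = secrets.token_hex(32)
print(jwt_secret)
```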

There are also some optional environment variables that you can set:

  • DEBUG: This is a boolean value that will enable the debug mode in the FastAPI application. This is useful for development, but you should never enable this in production. If you don't set this value, it will default to 0 (false). To enable it, set this to 1 (true).
  • LOG_FILE: If set, also write the logs into given file, otherwise only write to stdout (printing the logs).
  • TRACE_LEVEL_FILTER: Configuration for trace level logging; see the Trace logs config section.

Example .env file

The .env file should look something like this:

DEBUG=0
JWT_SECRET_TOKEN=e51ca69e8d42422c7d2f94bc14e9aaaf294bb55a881354633f5e44f2dc9fde71
MONGODB_URI=mongodb://root:test@localhost:27017/my-cool-database?authSource=admin

Trace logs config

We have a custom trace log level for the project, which can be used for debugging purposes. This level is below debug and can only be enabled if DEBUG=1. This log level is controlled through the TRACE_LEVEL_FILTER environment variable. It works in the following way:

Note

Due to the time constraints imposed on the project, the logging is sometimes somewhat lacking and there actually aren't that many trace level logs. This is something that could be improved in the future, if further development is done.

  • If DEBUG=0, the TRACE_LEVEL_FILTER variable is ignored, regardless of its value.
  • If TRACE_LEVEL_FILTER is not set, no trace logs will appear (debug logs only).
  • If TRACE_LEVEL_FILTER is set to *, the root logger will be set to TRACE level. All trace logs will appear.
  • When TRACE_LEVEL_FILTER is set to a list of logger names, delimited by a comma, each of the specified loggers will be set to TRACE level, leaving the rest at DEBUG level. For example: TRACE_LEVEL_FILTER="src.api.foo.foobar,src.api.bar.barfoo"
  • When TRACE_LEVEL_FILTER starts with a ! symbol, followed by a list of loggers, the root logger will be set to TRACE level, with the specified loggers being set to DEBUG level.

MongoDB

As you probably noticed, the project uses MongoDB as the database. If you're not familiar with MongoDB, it's a NoSQL database, which is very easy to use and set up. You can find more information about it on the official MongoDB website.

To set up a MongoDB instance, you can either use the provided docker-compose file, in which case you don't need to do anything, or set it up manually. For manual setup, you can follow the official installation guide.

Quick MongoDB setup

If you just need a quick MongoDB instance that you can spin up during development, I'd recommend using Docker. Note that you don't need to follow the Docker installation for the entire project (i.e. using docker-compose); you can run the project normally and just host the MongoDB instance through Docker. To do this, simply run:

sudo docker run -d --name mongodb \
  -e MONGO_INITDB_ROOT_USERNAME=root \
  -e MONGO_INITDB_ROOT_PASSWORD=test \
  -p 27017:27017 \
  mongo:latest

This will start a MongoDB instance with the root user having the username root and password test, using the admin authSource. The instance will be available on port 27017, so you can use it with the following connection string:

MONGODB_URI=mongodb://root:test@localhost:27017/my-cool-database?authSource=admin
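If you want to sanity-check the parts of such a connection string, Python's standard urllib.parse can split it (illustrative only; the actual parsing is handled by the MongoDB driver):

```python
from urllib.parse import parse_qs, urlsplit

uri = "mongodb://root:test@localhost:27017/my-cool-database?authSource=admin"
parts = urlsplit(uri)

print(parts.username)                          # user: root
print(parts.password)                          # password: test
print(parts.hostname, parts.port)              # host and port: localhost 27017
print(parts.path.lstrip("/"))                  # database: my-cool-database
print(parse_qs(parts.query)["authSource"][0])  # auth source: admin
```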

Once you're done, you can stop the instance by running:

sudo docker stop mongodb

To also remove the container, you can run:

sudo docker rm mongodb

Note

This MongoDB instance will not persist the data across runs. If you remove the container, all the data will be lost. If you need the data to persist, you can use a docker volume or a bind mount, as already explained in the Docker section.

Quickly populate the database

During development, it's often useful to have some data in the database to work with. To quickly populate the database with sample data, you can use the provided populate_db.py script. To run it, make sure you have activated the virtual environment and then run:

python populate_db.py

Documentation

The project includes Swagger UI documentation, which is automatically generated by FastAPI through the OpenAPI schema.

You can access it on the /docs endpoint, e.g. if you're running the project on localhost:8000, you can access the documentation at http://localhost:8000/docs.