# Event Management Backend
The Event Management Backend is a REST API developed using the FastAPI framework. It serves as the backend for the Event
Management System, providing the essential data and functionality for a calendar-based event management application.
This project was created as part of the final assignment for **AP7PD**, which specifically required a backend
implementation. For **AP7MT**, the focus was on the frontend; however, since the same frontend project is being reused,
this backend is introduced as a dependency. While not formally part of the graded assignment for AP7MT, the backend
plays a crucial role in providing the necessary functionality.
## Overview / Key Features
The backend is designed with robust and secure API functionality to support the Event Management System. It includes the
following key features:
- **User Management:** Provides endpoints for user registration and login. Passwords are securely stored as salted
bcrypt hashes, ensuring strong protection against unauthorized access.
- **Token-Based Authentication:** Implements JWT tokens using the Bearer scheme for secure and stateless authentication.
The system supports both access tokens (short-lived, typically one hour) and refresh tokens (long-lived, typically one
month), allowing users to obtain new access tokens without reauthentication (see the sketch after this list).
- **Session and Token Management:** Tokens are validated server-side and tracked in a dedicated database collection,
enabling advanced session management features:
  - Invalidation endpoints allow users to revoke individual tokens (e.g., in case of a lost device).
  - Session endpoints provide details on token creation, expiration, and active sessions.
- **Security-First Design:** Every endpoint includes proper security measures to prevent unauthorized access to user
data. While the architecture is built to potentially allow for future admin-level accounts, no such functionality is
included in this implementation.
- **Category Management:** Users can manage their categories through dedicated endpoints, including creating, viewing,
and deleting their custom categories.
- **Event Management:** Includes comprehensive endpoints for managing events. Events support multiple categories and can
be customized to suit various use cases.
- **Invitation System:** Provides endpoints for inviting users to events. Invitations generate notifications, and
invitees can accept or decline invitations. Accepted invitations automatically add the invitee as an attendee, allowing
events to appear on their calendar.
- **Notification System:** Notifications are produced for specific actions, such as:
  - A user being invited to an event.
  - The inviter being notified when the invitee accepts or declines their invitation.

  Notifications can be retrieved through dedicated endpoints. There is currently no support for creating custom
  notifications (e.g. by an administrator) through endpoints; however, the project is designed with scalability in mind,
  and such functionality would be fairly easy to add in the future, if needed.
- **Logging Support:** Basic logging functionality records important actions and aids in debugging. While functional,
logging could be expanded for more comprehensive coverage in future iterations.
- **Deployment Ready:** The backend includes a Dockerfile and docker-compose configuration for simple and reproducible
deployment, complete with a MongoDB instance.
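
To illustrate how a client typically uses the token-based authentication described above, here's a minimal sketch of the
flow using `curl`. Note that the endpoint paths and payload fields below are only illustrative placeholders, not the
actual routes of this API; the real routes and schemas are listed in the Swagger UI (see the
[Documentation](#documentation) section).

```bash
# NOTE: Placeholder paths/fields for illustration only; check /docs for the real ones.

# 1. Log in and obtain an access token and a refresh token
curl -X POST http://localhost:8000/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username": "alice", "password": "secret"}'

# 2. Call a protected endpoint with the (short-lived) access token, using the Bearer scheme
curl http://localhost:8000/events \
  -H "Authorization: Bearer <access_token>"

# 3. Once the access token expires, use the (long-lived) refresh token
#    to obtain a new access token without logging in again
curl -X POST http://localhost:8000/auth/refresh \
  -H "Authorization: Bearer <refresh_token>"
```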
## Technology Stack
The backend is implemented using the following technologies:
- **[Python](https://www.python.org/):** Version 3.12 or higher.
- **[FastAPI](https://fastapi.tiangolo.com/):** A modern, fast (high-performance), web framework for building APIs with
Python.
- **[MongoDB](https://www.mongodb.com)**: NoSQL database used to store events, users, and categories.
- **[motor](https://pypi.org/project/motor/):** Asynchronous MongoDB driver for Python.
- **[Beanie](https://beanie-odm.dev/):** An async ODM (Object Document Mapper) for MongoDB with Motor.
- **[Rye](https://rye.astral.sh/):** A dependency management tool for Python.
- **[Docker](https://www.docker.com/):** Used for containerization and deployment.
## Frontend Integration
This backend is designed to work seamlessly with the Event Management Frontend, a native Android application written in
Kotlin. The frontend interacts with this backend to manage users, events, attendees, and categories.
You can find the source code and detailed information about the frontend at: [Event Management Frontend
Repository](https://git.itsdrike.com/ap7pd/event-management-frontend)
## Installation
> [!TIP]
> Instead of manually installing the project, you can also use the provided `Dockerfile` and `docker-compose.yml` file.
> See the [Docker](#docker) section for more information. This will very likely be the easiest and quickest way to get
> the project up and running. If you're not interested in the docker install, keep reading.
If you only wish to run the project and you're not interested in doing some further development on it, you can simply
use `pip` and `venv` to install all the necessary dependencies:
> [!NOTE]
> Make sure you're in the root directory of the project (the same folder that this README file is in).
```bash
# Create & activate python virtual environment
python -m venv .venv
. .venv/bin/activate
# Install the dependencies
python -m pip install -e .
```
This will only install the runtime dependencies necessary to actually run the code (the development dependencies will
not be installed). The dependencies will be installed into their own isolated virtual environment, which won't interfere
with the system-wide Python packages.
### For development
This project uses [`rye`](https://rye.astral.sh/), which is a dependency management tool for Python. You will need to
install it. (On Arch Linux, you can run `pacman -S rye`.)
Once you get `rye` installed, go to the project's root directory and run:
```bash
rye sync
```
This will install all of the dependencies you'll need to run this project, including the development dependencies, into
a virtual environment. To then activate this environment, you can run:
```bash
. .venv/bin/activate
```
## Running
To run the project, make sure you've activated your virtual environment first, then simply execute:
```bash
poe run
```
Note that by default, this will start listening on `localhost:8000`. If you wish to change this, you can use the `--host`
and `--port` flags, like so:
```bash
poe run --host 0.0.0.0 --port 8080
```
(Changing the host from `localhost` to `0.0.0.0` is necessary to make the server accessible from other devices on the
network.)
If you wish to run the project in development mode (with auto-reload on change), you can instead use:
```bash
poe run-dev
```
> [!IMPORTANT]
> You will also need to have configured the project first; most notably, you'll need to set a MongoDB connection string,
> which also means you'll need to have a MongoDB instance running. You can read more about how to configure the
> project in the [Configuration](#configuration) section.
## Docker
As an alternative to manually installing & running the project, you can also use Docker, which simplifies deployment by
handling dependencies automatically in a containerized environment that will work consistently pretty much anywhere.
This approach is especially convenient if you're not interested in doing any further development of the project.
Additionally, the provided Docker Compose file includes a MongoDB instance, eliminating the need to set it up yourself.
First, you will need to have [Docker](https://docs.docker.com/engine/install/) and [Docker
compose](https://docs.docker.com/compose/install/) installed. Once you have that, you can run:
```bash
sudo docker-compose up
```
This will build the docker image from the attached `Dockerfile` and pull another image for the MongoDB instance. Once
done, it will start the backend server and the MongoDB instance. By default, the backend will be available at port 8000.
Feel free to edit the `docker-compose.yml` file if you wish to change this.
> [!NOTE]
> Note that if you change the code, it is necessary to also rebuild the docker image. You can do that by adding the
> `--build` flag to the command:
>
> ```bash
> sudo docker-compose up --build
> ```
If you wish to run the containers in the background, you can add the `-d` flag:
```bash
sudo docker-compose up -d
```
> [!IMPORTANT]
> You will also need to have configured the project first. For the Docker install though, you don't need to set the
> MongoDB connection string yourself, as the MongoDB instance runs through Docker as well and the connection string is
> already set there. That said, there are still some other required values that you will need to set. You can read more
> about how to configure the project in the [Configuration](#configuration) section.
> [!NOTE]
> By default, the docker container will always use a brand new database. If you wish to persist the database across
> runs, you will need to modify the docker-compose file and add a [docker
> volume](https://docs.docker.com/engine/storage/volumes/#use-a-volume-with-docker-compose) or a directory [bind
> mount](https://docs.docker.com/engine/storage/bind-mounts/#use-a-bind-mount-with-compose). By default, MongoDB will
> store its data in the `/data/db` directory.
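
For example, adding a named volume could look something like the snippet below. Note that the service name (`mongodb`)
and the volume name (`mongodb-data`) here are just placeholders; match them to what the provided `docker-compose.yml`
actually uses.

```yaml
services:
  mongodb:
    image: mongo:latest
    volumes:
      # Persist MongoDB's data directory across container re-creations
      - mongodb-data:/data/db

volumes:
  mongodb-data:
```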
## Ngrok forwarding
If you need to quickly expose the server to the internet for testing purposes, you can use [Ngrok](https://ngrok.com/).
This tool allows you to create a secure tunnel to your localhost, which can be accessed from anywhere on the internet,
from a randomly generated ngrok subdomain. If you're using Arch Linux, you can install it from the AUR (e.g. with `paru
-S ngrok`).
Assuming you're running the server on `localhost:8000`, you can expose it by running:
```bash
ngrok http 8000
```
That said, if you're on the same LAN, you can also just use the local IP address of the machine running the server.
## Configuration
To configure the project, you can either set the environment variables manually or create a `.env` file in the root directory of
the project (the same folder that this README file is in).
Currently, you will need to set 2 environment variables:
- **`JWT_SECRET_TOKEN`:** This is the secret token that will be used to sign the JWT tokens. This should be a long,
random string. I'd recommend generating it with the `openssl rand -hex 32` command, as shown below this list (if you're
on Windows, you can use `openssl` from WSL or generate it through some other means; you can also use the value from the
example later in this section, although I wouldn't recommend that for production use).
- **`MONGODB_URI`:** This is the connection string for the MongoDB instance. This should be in the format
`mongodb://username:password@host:port/database`. If you're using the admin authsource, you can also add the
`?authSource=admin` suffix to this string. Note that if you're using docker-compose, you don't need to set this
value, as it's already set in the `docker-compose.yml` file.
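
For example, to generate a suitable secret on Linux (or from WSL on Windows):

```bash
# Produces a 64-character random hex string to use as JWT_SECRET_TOKEN
openssl rand -hex 32
```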
There are also some optional environment variables that you can set:
- **`DEBUG`:** This is a boolean value that will enable the debug mode in the FastAPI application. This is useful for
development, but you should never enable this in production. If you don't set this value, it will default to `0`
(false). To enable it, set this to `1` (true).
- **`LOG_FILE`:** If set, the logs will also be written into the given file; otherwise, they are only written to stdout.
- **`TRACE_LEVEL_FILTER`:** Configuration for trace level logging, see: [trace logs config section](#trace-logs-config)
### Example `.env` file
The `.env` file should look something like this:
```dotenv
DEBUG=0
JWT_SECRET_TOKEN=e51ca69e8d42422c7d2f94bc14e9aaaf294bb55a881354633f5e44f2dc9fde71
MONGODB_URI=mongodb://root:test@localhost:27017/my-cool-database?authSource=admin
```
### Trace logs config
We have a custom `trace` log level for the project, which can be used for debugging purposes. This level is below
`debug` and can only be enabled if `DEBUG=1`. This log level is controlled through the `TRACE_LEVEL_FILTER` environment
variable. It works in the following way (an example follows the list below):
> [!NOTE]
> Due to the time constraints imposed on the project, the logging is sometimes somewhat lacking and there actually aren't
> that many trace level logs. This is something that could be improved in the future, if further development is done.
- If `DEBUG=0`, the `TRACE_LEVEL_FILTER` variable is ignored, regardless of its value.
- If `TRACE_LEVEL_FILTER` is not set, no trace logs will appear (debug logs only).
- If `TRACE_LEVEL_FILTER` is set to `*`, the root logger will be set to `TRACE` level. All trace logs will appear.
- When `TRACE_LEVEL_FILTER` is set to a list of logger names, delimited by a comma, each of the specified loggers will
be set to `TRACE` level, leaving the rest at `DEBUG` level. For example:
`TRACE_LEVEL_FILTER="src.api.foo.foobar,src.api.bar.barfoo"`
- When `TRACE_LEVEL_FILTER` starts with a `!` symbol, followed by a list of loggers, the root logger will be set to
`TRACE` level, with the specified loggers being set to `DEBUG` level.
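
For example, a `.env` snippet enabling trace logs for a couple of specific loggers could look like this (the logger
names are just placeholders; use the names of the modules you're actually interested in):

```dotenv
DEBUG=1
# Trace logs only for these two loggers, everything else stays at DEBUG level:
TRACE_LEVEL_FILTER=src.api.foo.foobar,src.api.bar.barfoo
# Alternatively, enable trace logs everywhere:
#TRACE_LEVEL_FILTER=*
# Or enable trace logs everywhere except for the listed loggers:
#TRACE_LEVEL_FILTER=!src.api.foo.foobar
```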
### MongoDB
As you probably noticed, the project uses MongoDB as the database. If you're not familiar with MongoDB, it's a NoSQL
database, which is very easy to use and set up. You can find more information about it on the [official MongoDB
website](https://www.mongodb.com/).
To set up a MongoDB instance, you can either use the provided docker-compose file, in which case you don't need to do
anything, or set it up manually. For manual setup, you can follow the [official installation
guide](https://docs.mongodb.com/manual/installation/).
#### Quick MongoDB setup
If you just need a quick MongoDB instance that you can spin up during development, I'd recommend using Docker. Note
that you don't need to follow the Docker installation for the entire project (i.e. using docker-compose); you can run the
project normally and just host the MongoDB instance through Docker. To do this, simply run:
```bash
sudo docker run -d --name mongodb \
  -e MONGO_INITDB_ROOT_USERNAME=root \
  -e MONGO_INITDB_ROOT_PASSWORD=test \
  -p 27017:27017 \
  mongo:latest
```
This will start a MongoDB instance with the root user having the username `root` and password `test`, using the admin
`authSource`. The instance will be available on port 27017, so you can use it with the following connection string:
```dotenv
MONGODB_URI=mongodb://root:test@localhost:27017/my-cool-database?authSource=admin
```
Once you're done, you can stop the instance by running:
```bash
sudo docker stop mongodb
```
To also remove the container, you can run:
```bash
sudo docker rm mongodb
```
> [!NOTE]
> This MongoDB instance will not persist the data across runs. If you remove the container, all the data will be lost.
> If you need the data to persist, you can use a docker volume or a bind mount, as already explained in the
> [Docker](#docker) section.
#### Quickly populate the database
During development, it's often useful to have some data in the database to work with. To quickly populate the
database with some sample data, you can use the provided `populate_db.py` script. To run it, make sure you have
activated the virtual environment and then run:
```bash
python populate_db.py
```
## Documentation
The project includes Swagger UI documentation, which is automatically generated by FastAPI from the OpenAPI schema.
You can access it at the `/docs` endpoint; e.g. if you're running the project on `localhost:8000`, you can access the
documentation at <http://localhost:8000/docs>.
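
If you need the raw OpenAPI schema itself (for example, to generate API client code), FastAPI also serves it as JSON;
by default (unless the project overrides it), it's available at the `/openapi.json` endpoint:

```bash
curl http://localhost:8000/openapi.json
```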