diff --git a/docs/extensions.md b/docs/extensions.md index 6e8c114d..4b2ec602 100644 --- a/docs/extensions.md +++ b/docs/extensions.md @@ -1,18 +1,55 @@ -# Extensions +# Edgy Extensions -Edgy has extension support build on the Monkay extensions system. Adding extensions -is possible in settings via the attribute/parameter `extensions`. +Edgy's architecture includes built-in support for extensions, leveraging the Monkay extensions system. This allows you to enhance Edgy's functionality and customize its behavior to suit your specific needs. -They must implement the monkay extension protocol or return as a callable a class implementing the extension protocol. -This sounds hard but it isn't: +## Adding Extensions -``` python +You can add extensions to Edgy through the `extensions` attribute or parameter in your Edgy settings. + +Extensions must adhere to the Monkay extension protocol or be a callable that returns a class implementing this protocol. + +This might sound complex, but it's designed to be straightforward. + +```python {!> ../docs_src/extensions/settings.py !} ``` -You can also lazily provide them via add_extension (should happen before the instance is set) +**Explanation:** + +* **`extensions`:** This attribute or parameter in your Edgy settings is used to specify the extensions you want to load. +* **`MyExtension`:** This class represents your custom extension. It must implement the Monkay extension protocol, which defines how extensions interact with Edgy. +* By adding `MyExtension` to the `extensions` list, you're telling Edgy to load and activate this extension. +## Lazy Loading Extensions -``` python +In some cases, you might want to add extensions lazily, after your settings have been defined but before the Edgy instance is fully initialized. You can achieve this using the `add_extension` method. + +```python {!> ../docs_src/extensions/add_extension.py !} ``` + +**Explanation:** + +* **`add_extension`:** This method allows you to add extensions dynamically. +* **`MyExtension`:** This is your custom extension class, which must implement the Monkay extension protocol. +* Calling `add_extension` adds the extension to the list of extensions to be loaded. + +**Important Considerations:** + +* **Timing:** Ensure that you call `add_extension` before the Edgy instance is fully initialized. Adding extensions after initialization might lead to unexpected behavior. +* **Monkay Protocol:** Your extensions must adhere to the Monkay extension protocol. This protocol defines the methods and attributes that your extensions must implement to interact with Edgy. + +## Benefits of Using Extensions + +* **Customization:** Extensions allow you to customize Edgy's behavior to meet your specific requirements. +* **Modularity:** Extensions promote modularity by allowing you to separate concerns and add functionality without modifying Edgy's core code. +* **Reusability:** Extensions can be reused across multiple Edgy projects. + +## Use Cases + +* **Adding custom fields:** You can create extensions that add new field types to Edgy. +* **Implementing custom query logic:** You can create extensions that modify or extend Edgy's query capabilities. +* **Integrating with external services:** You can create extensions that integrate Edgy with external services, such as logging or monitoring tools. +* **Adding custom validation logic:** You can create extensions that add custom validation logic to your Edgy models. 
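To make these use cases concrete, here is a minimal, hypothetical sketch of an extension and how it could be registered through settings. The protocol surface shown (a `name` attribute plus an `apply()` hook) and the `EdgySettings` import path are assumptions about the Monkay extension system rather than confirmed API; treat the included `docs_src` examples as the authoritative reference.

```python
# Hypothetical sketch only: the Monkay protocol surface (name/apply) and the
# settings import path are assumptions, not confirmed Edgy API.
from edgy.conf.global_settings import EdgySettings  # assumed location


class MyExtension:
    # Identifier the extension is registered under (assumed attribute name).
    name: str = "my-extension"

    def apply(self, app) -> None:
        # Hook invoked when extensions are applied; put your customization here.
        ...


class AppSettings(EdgySettings):
    # Instances, or callables returning an extension, can be listed here.
    extensions: list = [MyExtension()]
```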
+ +By leveraging Edgy's extension system, you can tailor Edgy to your specific needs and build powerful, customized applications. diff --git a/docs/file-handling.md b/docs/file-handling.md index bdf92a20..d5737f61 100644 --- a/docs/file-handling.md +++ b/docs/file-handling.md @@ -1,209 +1,148 @@ -# File handling +# File Handling -File handling is notorios difficult topic in ORMs. In edgy we try to streamline it, -so it is easy to understand and secure. +File handling in ORMs is notoriously challenging. Edgy aims to simplify and secure this process. -For this edgy has three security layers: +Edgy implements three security layers: -1. Restricting the image formats parsed (ImageFields). -2. Approved-only open of files (option). -3. Direct size access as field. This way quota handling gets easy. No need to manually track the sizes. - However this configurable. +1. **Restricting Image Formats:** For `ImageFields`, Edgy restricts the image formats that can be parsed, mitigating potential security vulnerabilities. +2. **Approved-Only File Opening:** Edgy provides an option for approved-only file opening, ensuring that potentially dangerous files are not automatically processed. +3. **Direct Size Access:** Edgy offers direct size access as a field, simplifying quota management and eliminating the need for manual size tracking. This feature is configurable. -and to align it with ORMs it uses a staging+non-overwrite by default concept for more safety. +To align with ORM best practices, Edgy employs a staging and non-overwrite concept for enhanced safety. This prevents file name clashes and ensures that files are not overwritten if the save process fails. -This means, there has no worry about file names clashing (one of the main reasons people wanting -access to the model instance during file name creation). -Nor that if a file shall be overwritten and the save process fails the file is still overwritten. +Edgy uses process PID prefixing and thread-safe name reservation, committing file changes only after saving the model instance. This eliminates concerns about file overwrites, regardless of whether you're using processes or threads. -It is realized via process pid prefixing and a thread-safe reserving an available name which is used after for the file creation and -commiting the file changes only after saving the model instance (direct manipulation is still possible via parameters). - -In short: Just use and stop worrying about the files beeing overwritten. No matter if you use processes or threads. - - -There are three relevant File classes: - -- File: Raw IO to a file-like. Baseclass with many helper functions. -- ContentFile: In-memory IO. Transforms bytes on the fly. But can also be used with files. -- FieldFile: Transactional handler for field oeprations. Used in FileField. Not used directly except in subclasses of FileField. +Edgy provides three relevant file classes: +* **File:** A base class for raw I/O operations on file-like objects, offering various helper functions. +* **ContentFile:** An in-memory I/O class that transforms bytes on the fly and can also be used with files. +* **FieldFile:** A transactional handler for field operations, used in `FileField` and its subclasses. ## Configuration -Filehandling is configured via the global settings. See `edgy/conf/global_settings.py` for the options. +File handling is configured through global settings, which can be found in `edgy/conf/global_settings.py`. 
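The sections below cover direct access and the field types in detail. As a quick, non-authoritative orientation, here is a sketch of the typical flow this page describes: declaring a `FileField`, creating a record from in-memory content, staging a replacement, and staging a deletion. The model, registry, import locations, and exact `ContentFile`/`FieldFile` signatures are illustrative assumptions; check the field reference before relying on them.

```python
# Illustrative sketch: model/field names are made up, and the import locations
# and ContentFile/FieldFile signatures are assumptions based on this page.
import edgy
from edgy.files import ContentFile, FieldFile  # assumed import path

registry = edgy.Registry(database="sqlite:///db.sqlite")


class Document(edgy.Model):
    # Size and metadata columns are added automatically when enabled on the field.
    file: FieldFile = edgy.FileField(null=True)

    class Meta:
        registry = registry


async def demo() -> None:
    # Create a record from in-memory bytes; the file is committed on save().
    doc = await Document.query.create(file=ContentFile(b"hello", name="hello.txt"))

    # Stage replacement content; it is only written out once the instance is saved.
    doc.file.save(b"replacement", name="hello2.txt")
    await doc.save()

    # Setting the field to None stages deletion of the stored file (needs null=True).
    doc.file = None
    await doc.save()
```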
-## Direct +## Direct Access -Direct access is possible via the storages object in edgy.files. Here you can access the files directly with a storage of your choice. -You get an url or path for accessing the files directly. -This way besides the global configuration nothing is affected. +Direct file access is possible through the `storages` object in `edgy.files`. This allows you to access files directly using a storage of your choice, obtaining URLs or paths for direct file access. This approach minimizes the impact on other parts of your application. -However there is also just limited access to transactional file handling. There is more control by using `save` explicit. +However, direct access provides limited transactional file handling. For more control, use the `save` method explicitly. ## Fields -The recommended way to handle files with database tables are FileFields and ImageFields. Both are quite similar, -in fact ImageFields are a subclass with image related extensions. +`FileFields` and `ImageFields` are the recommended way to handle files within database tables. `ImageFields` are a subclass of `FileFields` with additional image-related extensions. -Fields follow a multi-store concept. It is possible to use in the same FileField multiple stores. The store name is saved along with -the file name, so the right store can be retrieved later. +Edgy fields support a multi-store concept, allowing you to use multiple storage backends within the same field. The storage name is saved alongside the file name, enabling retrieval of the correct storage later. ### FileField -FileField allow to save files next to the database. In contrast to Django, you don't have to worry about the file-handling. -It is all done automatically. - -The cleanups when a file gets unreferenced are done automatically (No old files laying around) but this is configurable too. -Queries are fully integrated, it is possible to use delete, bulk_create, bulk_update without problems. +`FileField` allows you to save files alongside your database records. Edgy handles file operations automatically, including cleanups when files are unreferenced. -Also by default overwriting is not possible. Files even honor a failed save and doesn't overwrite blindly the old ones. +Edgy also prevents overwriting files by default, even in the event of failed save operations. -Setting a file field to None, implicitly deletes the file after saving. +Setting a `FileField` to `None` implicitly deletes the file after saving. -For higher control, the methods of the FieldFile can be used. +For finer control, you can use the methods of the `FieldFile` class. !!! Tip - You may want to set null=True to allow the deletion of the file and having a consistent state afterward. - However you can circumvent the logic by using `delete` with `instant=True` which disables the transactional - file handling and just deletes the file and set the name when `null=True` to None. - In DB the old name will be still referenced, so using `instant=True` is unsafe, except if the object is deleted anyway. - The `instant` parameter is used for the object deletion hook to cleanup. + Set `null=True` to allow file deletion and maintain a consistent state. Use `delete(instant=True)` to bypass transactional file handling and delete files immediately (use with caution). #### Parameters -- `storage`: (Default: `default`) The default storage to use with the field. -- `with_size`: (Default: `True`). Enable the size field. -- `with_metadata`: (Default: `True`). 
Enable the metadata field. -- `with_approval`: (Default: `False`). Enable the approval logic. -- `extract_mime`: (Default: `True`). Save the mime in the metadata field. You can set "approved_only" to only do this for approved files. -- `mime_use_magic`: (Default: `False`). Use the `python-magic` library to get the mime type -- `field_file_class`: Provide a custom FieldFile class. -- `generate_name_fn`: fn(instance (if available), name, file, is_direct_name). Customize the name generation. -- `multi_process_safe`: (Default: `True`). Prefix name with the current process id by default. +* `storage`: (Default: `default`) The default storage to use. +* `with_size`: (Default: `True`) Enable the size field. +* `with_metadata`: (Default: `True`) Enable the metadata field. +* `with_approval`: (Default: `False`) Enable approval logic. +* `extract_mime`: (Default: `True`) Save MIME type in metadata. Set to `"approved_only"` to do this only for approved files. +* `mime_use_magic`: (Default: `False`) Use the `python-magic` library to get the MIME type. +* `field_file_class`: Provide a custom `FieldFile` class. +* `generate_name_fn`: Customize name generation. +* `multi_process_safe`: (Default: `True`) Prefix name with process ID. !!! Tip - If you don't want the process pid prefix you can disable this feature with `multi_process_safe=False`. + Disable process PID prefixing with `multi_process_safe=False`. !!! Note - The process pid prefixing has a small limitation: all processes must be in the same process namespace (e.g. docker). - If two processes share the same pid and are alive, the logic doesn't work but because of the random part a collision will be still unlikely. - You may want to add an unique container identifier or ip address via the generate_name_fn parameter to the path. + Process PID prefixing requires all processes to be in the same process namespace. Use `generate_name_fn` to add unique identifiers. #### FieldFile -Internally the changes are tracked via the FieldFile pseudo descriptor. It provides some useful interface parts of a file-like (at least -so much, that pillow open is supported). - -You can manipulate the file via setting a file-like object or None or for better control, there are three methods: +`FieldFile` tracks changes and provides a file-like interface. -* save(content, *, name="", delete_old=True, multi_process_safe=None, approved=None, storage=None, overwrite=None): -* delete(*, instant, approved): Stage file deletion. When setting instant, the file is deleted without staging. -* set_approved(bool): Set the approved flag. Saved in db with `with_approval` +You can manipulate files by setting a file-like object or `None`, or use the following methods: -`content` is the most important parameter. It supports File-Like objects in bytes mode as well as bytes directly as well as File instances. - -In contrast to Django the conversion is done automatically. +* `save(content, *, name="", delete_old=True, multi_process_safe=None, approved=None, storage=None, overwrite=None)`: Save a file. +* `delete(*, instant, approved)`: Stage file deletion. +* `set_approved(bool)`: Set the approved flag. +`content` supports file-like objects, bytes, and `File` instances. !!! Tip - You can overwrite a file by providing overwrite=True to save and pass the old file name. - There is no extra prefix added from `multi_process_safe` by default (except you set the parameter explicitly `True`), so the overwrite works. + Overwrite files with `overwrite=True` and the old file name. !!! 
Tip - You can set the approved flag while saving deleting by providing the approved flag. + Set the approved flag during save or delete. !!! Tip - If you want for whatever reason `multi_process_safe` and `overwrite` together, you have to specify both parameters explicitly. + Use `multi_process_safe` and `overwrite` together by specifying both parameters explicitly. #### Metadata -The default metadata of a FileField consists of - -- `mime` +`FileField` metadata includes: -Additionally if `with_size` is True you can query the size of the file in db via the size field. -It is automatically added with the name `_size`. +* `mime` +If `with_size` is `True`, the file size is available in the database as `_size`. ### ImageField -Extended FileField for image handling. Because some image formats are notorious unsafe you can limit the loaded formats. - +`ImageField` extends `FileField` for image handling, allowing you to restrict loaded image formats. -#### Extra-Parameters +#### Extra Parameters -- `with_approval`: (Default: `True`). Enable the approval logic. Enabled by default in ImageFields. -- `image_formats`: (Default: []). Pillow formats to allow loading when the file is not approved. Set to None to allow all. -- `approved_image_formats`: (Default: None). Extra pillow formats to allow loading when the file is approved. Set to None to allow all (Default). +* `with_approval`: (Default: `True`) Enable approval logic. +* `image_formats`: (Default: `[]`) Allowed Pillow formats for non-approved files. +* `approved_image_formats`: (Default: `None`) Allowed Pillow formats for approved files. #### ImageFieldFile -This is a subclass from FieldFile with an additional method +`ImageFieldFile` is a subclass of `FieldFile` with an additional method: -`open_image` - -which opens the file as a PIL ImageFile. +* `open_image`: Opens the file as a PIL `ImageFile`. !!! Note - `open_image` honors the format restrictions by the ImageField. - + `open_image` honors the format restrictions specified by `ImageField`. #### Metadata -The default metadata of a ImageField consists of +`ImageField` metadata includes: -- `mime` -- `height` (if the image could be loaded (needs maybe approval)) -- `width` (if the image could be loaded (needs maybe approval)) +* `mime` +* `height` +* `width` -Also the size is available like in FileField in a seperate field (if enabled). +The file size is also available, similar to `FileField`. ## Concepts ### Quota -When storing user data, a quota calculation is important to prevent a malicous use as well as billing -the users correctly. - -A naive implementation would iterate through the objects and add all sizes, so a storage usage can be determined. - -This is inperformant! We have the size field. - -Instead of iterating through the objects, we just sum up the sizes in db per table via the sum operator - +Use the size field to calculate storage usage efficiently. ```python {!> ../docs_src/fields/files/file_with_size.py !} ``` - - ### Metadata -Because metadata of files are highly domain specific a JSONField is used to hold the attributes. By default -`mime` is set and in case of ImageField `height` and `width`. This field is writable, so it can be extended via automatically set metadata. -However when saving a file or the model without exclusions it is overwritten. - -The recommend way of extending it is via subclassing the fields (actually the factories) and providing a custom extract_metadata. 
- -The column name of the metadata field differs from the field name because of size reasons (long names can lead to violating -column name length limits). - -It is available as column `_mdata`. - - -### Approval concept +Metadata is stored in a `JSONField`. Extend metadata by subclassing fields and providing a custom `extract_metadata` method. -FileFields and ImageField have a parameter `with_approval`. This parameter enables a per file approval. -Non-approved files cannot be opened and only a limited set of attributes is extracted (e.g. mime). +The metadata column is named `_mdata`. -This ensures dangerous files are not opened automatically but first checked by a moderator or admin before they are processed. -For usability `ImageField` allows to specify image formats which are processed regardless if the file was approved. -By default list of these formats is empty (behavior is off). +### Approval Concept -Third party applications can scan for the field: +The `with_approval` parameter enables per-file approval. Non-approved files have limited attribute extraction. -`_approved` or the column with name `_ok` +`ImageField` allows specifying image formats that are processed regardless of approval. -to detect if a file was approved. +Third-party applications can check the `_approved` field or `_ok` column to determine file approval status. diff --git a/docs/inspectdb.md b/docs/inspectdb.md index 87b96460..bc7f7699 100644 --- a/docs/inspectdb.md +++ b/docs/inspectdb.md @@ -1,61 +1,59 @@ -# Inspect DB +# Inspecting Existing Databases -Does it happen often changing ORMs during a project? Well, not that often really but it might -happen and usually during the discovery time where the best stack is being *figured*. +It's not uncommon for projects to switch ORMs, especially during the initial discovery phase when the ideal technology stack is being determined. While the SQL database often remains constant, the ORM used to interact with it may change. -Well, something that usually remains is the SQL database and what it changes is normally the ORM -that operates on the top of it. - -The `inspectdb` is another client management tool that allows you to read from an existing database -all the tables and generates [ReflectModel](./reflection/reflection.md) objects for your. - -In other words, it maps existing database tables into an **Edgy like syntax** to make your life -easier to manage. +Edgy's `inspectdb` command-line tool addresses this scenario by allowing you to read an existing database's schema and generate [Reflected Models](./reflection/reflection.md). This effectively maps your database tables into an **Edgy-compatible syntax**, simplifying the process of working with pre-existing databases. !!! Tip - If you are not familiar with [ReflectModel](./reflection/reflection.md), now is a good time to catch-up. - -## Reflect models + If you're unfamiliar with [Reflected Models](./reflection/reflection.md), taking a moment to review that section will provide valuable context. -These are the models automatically generated by **Edgy** when the `inspectdb` is triggered. +## Reflected Models: A Bridge to Existing Databases -The reason for the [ReflectModel](./reflection/reflection.md) it is simply because those are not managed by -the [migration system](./migrations/migrations.md) **but you still operate as a normal [Edgy model](./models.md)**. +When `inspectdb` is executed, it generates [Reflected Models](./reflection/reflection.md). 
These models differ from standard Edgy models in that they are **not managed** by [Edgy's migration system](./migrations/migrations.md). However, they function identically to regular [Edgy models](./models.md) in terms of data access and manipulation. -In other words, it is a *safety measure* of **Edgy**. +This distinction serves as a **safety measure**. By excluding reflected models from Edgy's migration system, Edgy prevents accidental modifications to your existing database schema. -## How does it work +## How `inspectdb` Works -Now it is time for the good stuff right? Well, it is actually very simple. +Using `inspectdb` is straightforward. You can generate reflected models using a database URL. -* Via [database url](#database-url). +### Database URL -### Database url - -This is the easiest and probably the one way you will be using all the time and syntax is as simple as this: +This is the most common and convenient method. The syntax is as follows: ```shell edgy inspectdb --database <CONNECTION_STRING> > <models-file>.py ``` -**Example** +**Example:** ```shell edgy inspectdb --database "postgres+asyncpg://user:password@localhost:5432/my_db" > models.py ``` -And that is it! This simple. The `inspectdb` will write the models inside the specified file and -from there you can use them anywhere. +This command will generate Edgy reflected models based on the specified database and write them to the `models.py` file. You can then use these models within your Edgy application. -#### Parameters +#### `inspectdb` Parameters -To check the available parameters for the `inspectdb`: +To explore the available parameters for `inspectdb`, use the `--help` flag: ```shell edgy inspectdb --help ``` -* **schema** - The name of the schema to connect. For example, in `MSSQL` the `dbo` is usually used. -This will be probably used on rare occasions by it is available just in case you need. -* **database** - The fully qualified connection string to the database. Example: -`postgres+asyncpg://user:password@localhost:5432/my_db`. +* **`--schema SCHEMA`:** Specifies the schema to connect to. For example, in MSSQL, `dbo` is commonly used. It's rarely needed, but available when your setup requires a specific schema. +* **`--database CONNECTION_STRING`:** Provides the fully qualified connection string to the database. Example: `postgres+asyncpg://user:password@localhost:5432/my_db`. + +## Practical Use Cases + +* **Migrating to Edgy:** If you're transitioning an existing database-driven application to Edgy, `inspectdb` allows you to quickly generate models for your existing tables. +* **Working with Legacy Databases:** When interacting with legacy databases that weren't initially designed for Edgy, `inspectdb` enables you to seamlessly integrate them. +* **Rapid Prototyping:** During development, `inspectdb` can be used to quickly generate models for existing database schemas, accelerating the prototyping process. + +## Key Considerations + +* **Data Types:** `inspectdb` attempts to map database data types to the closest Edgy field types. However, manual adjustments may be necessary in some cases. +* **Relationships:** `inspectdb` can infer foreign key relationships between tables. However, complex or non-standard relationships may require manual configuration. +* **Migrations:** Remember that reflected models are not managed by Edgy's migration system. Any schema changes must be made directly to the database.
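To give a feel for the output, here is a hedged sketch of the kind of reflected model `inspectdb` might write to `models.py`. The table, class, and field names are purely illustrative, and the exact fields and options emitted depend entirely on your schema.

```python
# Illustrative sketch of inspectdb-style output; the actual generated code
# depends on your database schema, and the names used here are hypothetical.
import edgy

database = edgy.Database("postgres+asyncpg://user:password@localhost:5432/my_db")
registry = edgy.Registry(database=database)


class Users(edgy.ReflectModel):
    # Columns are mapped to the closest matching Edgy field types.
    id: int = edgy.BigIntegerField(primary_key=True)
    name: str = edgy.CharField(max_length=255, null=True)

    class Meta:
        registry = registry
        tablename = "users"
```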
+ +By using `inspectdb`, you can efficiently bridge the gap between existing databases and Edgy, facilitating seamless integration and simplifying the process of working with pre-existing data. diff --git a/docs/managers.md b/docs/managers.md index 27b664b8..3b25d034 100644 --- a/docs/managers.md +++ b/docs/managers.md @@ -1,68 +1,75 @@ # Managers -The managers are a great tool that **Edgy** offers. Heavily inspired by Django, the managers -allow you to build unique tailored queries ready to be used by your models. -Unlike in Django Managers are instance and class aware. -For every inheritance they are shallow copied and if used on an instance you have also an shallow copy you can customize. +Managers are a powerful feature in **Edgy**, heavily inspired by Django's manager system. They allow you to create tailored, reusable query sets for your models. Unlike Django, Edgy managers are instance and class aware. For every inheritance, they are shallow copied, and if used on an instance, you also have a shallow copy that you can customize. -Note: shallow copy means, deeply nested attributes or mutable attributes must be copied and not modified. As an alternative `__copy__` can be overwritten to do this for you. +**Note:** Shallow copy means that deeply nested or mutable attributes must be copied, not modified. Alternatively, you can override `__copy__` to handle this for you. -**Edgy** by default uses the the manager called `query` for direct queries which it makes it simple to understand. -For related queries `query_related` is used which is by default a **RedirectManager** which redirects to `query`. +**Edgy** uses the `query` manager by default for direct queries, which simplifies understanding. For related queries, `query_related` is used, which is a **RedirectManager** by default that redirects to `query`. -Let us see an example. +Let's look at a simple example: ```python hl_lines="23 25" {!> ../docs_src/models/managers/simple.py !} ``` -When querying the `User` table, the `query` (manager) is the default and **should** be always -presented when doing it so. +When querying the `User` table, the `query` manager is the default and **should** always be present. ## Inheritance -Managers can set the inherit flag to False, to prevent being used by subclasses. Things work like for fields. -An usage would be injected managers though we have non yet. +Managers can set the `inherit` flag to `False` to prevent subclasses from using them. This is similar to how fields work. This is useful for injected managers, though we don't have any yet. -## Custom manager +## Custom Managers -It is also possible to have your own custom managers and to do it so, you **should inherit** -the **Manager** class and override the `get_queryset()`. For further customization it is possible to -use the **BaseManager** class which is more extensible. +You can create your own custom managers by **inheriting** from the **Manager** class and overriding the `get_queryset()` method. For more extensive customization, you can use the **BaseManager** class, which is more extensible. For those familiar with Django managers, the concept is exactly the same. 😀 -**The managers must be type annotated ClassVar** or an `ImproperlyConfigured` exception will be raised. +**Managers must be type annotated as ClassVar**, or an `ImproperlyConfigured` exception will be raised. ```python hl_lines="19" {!> ../docs_src/models/managers/example.py !} ``` -Let us now create new manager and use it with our previous example. 
+Let's create a new manager and use it with our previous example: ```python hl_lines="26 42 45 48 55" {!> ../docs_src/models/managers/custom.py !} ``` -These managers can be as complex as you like with as many filters as you desire. What you need is -simply override the `get_queryset()` and add it to your models. +These managers can be as complex as you like, with as many filters as you need. Simply override `get_queryset()` and add the manager to your models. -## Override the default manager +## Overriding the Default Manager -Overriding the default manager is also possible by creating the custom manager and overriding -the `query` manager. By default the `query`is also used for related queries. This can be customized via setting -an explicit `query_related` manager. +You can override the default manager by creating a custom manager and overriding the `query` manager. By default, `query` is also used for related queries. This can be customized by setting an explicit `query_related` manager. ```python hl_lines="26 39 42 45 48" {!> ../docs_src/models/managers/override.py !} ``` -Now with only overwriting the related manager: +Now, let's override only the related manager: ```python hl_lines="26 39 42 45 48" {!> ../docs_src/models/managers/override_related.py !} ``` !!! Warning - Be careful when overriding the default manager as you might not get all the results from the - `.all()` if you don't filter properly. + Be careful when overriding the default manager, as you might not get all the results from `.all()` if you don't filter properly. + +## Key Concepts and Benefits + +* **Reusability:** Managers allow you to encapsulate complex query logic and reuse it across your application. +* **Organization:** They help keep your model definitions clean and organized by moving query logic out of the model class. +* **Customization:** You can create managers that are tailored to the specific needs of your models. +* **Instance and Class Awareness:** Edgy managers are aware of the instance and class they are associated with, allowing for more dynamic and context-aware queries. +* **Inheritance Control:** The `inherit` flag allows you to control whether managers are inherited by subclasses. +* **Separation of Concerns:** Managers allow you to separate query logic from model definitions, leading to cleaner and more maintainable code. + +## Use Cases + +* **Filtering by Status:** Create a manager that only returns active records. +* **Ordering by Specific Fields:** Create a manager that returns records ordered by a specific field or set of fields. +* **Aggregations:** Create a manager that performs aggregations on your data, such as calculating averages or sums. +* **Complex Joins:** Create a manager that performs complex joins between multiple tables. +* **Custom Query Logic:** Create a manager that implements custom query logic that is specific to your application. + +By using managers effectively, you can create more powerful and maintainable Edgy applications. diff --git a/docs/marshalls.md b/docs/marshalls.md index aa9063bb..41031b42 100644 --- a/docs/marshalls.md +++ b/docs/marshalls.md @@ -1,49 +1,34 @@ -# Marshalls +# Marshalls in Edgy -Imagine you need to serialize you data and adding some extra flavours on top of it. Now, imagine -that [Edgy models](./models.md) contain information that could be used but its not accessible -directly upon the moment of serialization. +Marshalls in Edgy provide a powerful mechanism for serializing data and adding extra layers of customization. 
They allow you to augment Edgy models with additional information that might not be directly accessible during serialization. -Here is where the `marshalls` come into play. +Essentially, marshalls facilitate adding validations on top of existing models and customizing the serialization process, including restricting which fields are serialized. While not primarily designed for direct database interaction, marshalls offer an interface to perform such operations if needed, through the `save()` method. -The `marshalls` will simply help you adding those extra validations on the top of your existing -model and add those same extras in the serialization process or even restrict the fields being -serialized, for instance, you might not want to show all the fields. +## Marshall Class -A `marshall` is not designed to interact 100% with the database operations since that is done -by the Edgy model but it provides an interface that can also do that in case you want, the -[save method](#save). - -## Marshall - -This is the main class that **must** be subclassed when creating a Marshall. There is where -you declare all the extra fields and/or fields you want to serialize. +The `Marshall` class is the base class that **must** be subclassed when creating a marshall. It's where you define extra fields and specify which fields to serialize. ```python from edgy.core.marshalls import Marshall ``` -When declaring the `Marshall` you **must** declare a [ConfigMarshall](#configmarshall) and then -all the extras you might want to add. +When declaring a `Marshall`, you **must** define a [ConfigMarshall](#configmarshall) and then add any extra fields you want. -In a nutshell, this is how you can use a Marshall. +Here's a basic example of how to use a marshall: ```python {!> ../docs_src/marshalls/nutshell.py !} ``` -Ok, there is a lot to unwrap here but let us go step by step. +Let's break this down step by step. -The `Marshall` has a `marshall_config` that **must be declared** specifying the `model` and `fields`. +The `Marshall` has a `marshall_config` attribute that **must be declared**, specifying the `model` and `fields`. -The `fields` is a list of the **available fields** of the [model](./models.md) and it serves to specifically -specify which ones should the marshall serialize directly from the model. +The `fields` list contains the names of the [model](./models.md) fields that should be serialized directly from the model. -Then, the `extra` and `details` are marshall `fields`, that means, the fields that are not model fields -directly but must be serialized with the extra bit of information. You can check more details about -the [Fields](#fields) later on. +The `extra` and `details` fields are marshall-specific fields, meaning they are not directly from the model but are included in the serialization. You can find more details about these [Fields](#fields) later in this document. -When the marshall is fully declared, you can simply do this: +Once the marshall is defined, you can use it like this: ```python data = {"name": "Edgy", "email": "edgy@example.com"} @@ -51,7 +36,7 @@ marshall = UserMarshall(**data) marshall.model_dump() ``` -And the result will be: +The result will be: ```json { @@ -62,25 +47,20 @@ And the result will be: } ``` -As you can see, the `Marshall` is also a Pydantic model so you can take the full potential of it. +As you can see, `Marshall` is also a Pydantic model, allowing you to leverage its full potential. 
-There are more operations and things you can do with marshalls regarding the [fields](#fields) that -you can read in the next sections. +There are more operations and customizations you can perform with marshalls, particularly regarding [fields](#fields), which are covered in the following sections. ## ConfigMarshall -To operate with the marshalls you will need to declare the `marshall_config` which is simply a -typed dictionary containing the following keys: +To work with marshalls, you need to declare a `marshall_config`, which is a typed dictionary containing the following keys: -* **model** - The Edgy [model](./models.md) associated with the Marshall or a string `dotted.path` -pointing to the model. -* **fields** - A list of strings of the fields you want to include by default in the serialization -of the marshall. -* **exclude** - A list of strings containing the name of the fields you **don't want to** have serialized. +* **model:** The Edgy [model](./models.md) associated with the marshall, or a string `dotted.path` pointing to the model. +* **fields:** A list of strings representing the fields to include in the marshall's serialization. +* **exclude:** A list of strings representing the fields to exclude from the marshall's serialization. !!! warning - There is a caveat though, **you can only declare `fields` or `exclude` but not both** and the `model` - is mandatory or else an exception is raised. + **You can only declare either `fields` or `exclude`, but not both.** The `model` is mandatory, or an exception will be raised. === "fields" @@ -94,10 +74,9 @@ of the marshall. {!> ../docs_src/marshalls/exclude.py !} ``` -The `fields` also allow the use of `__all__`. This means that you want all the fields declared in -your Edgy model. +The `fields` list also supports the use of `__all__`, which includes all fields declared in your Edgy model. -**Example** +**Example:** ```python class CustomMarshall(Marshall): @@ -106,24 +85,20 @@ class CustomMarshall(Marshall): ## Fields -Here is where the things get interesting. When declaring a `Marshall` and want to add extra fields -to the serialization, you can do it by declaring two types of fields. +This is where things get interesting. When declaring a `Marshall` and adding extra fields to the serialization, you can use two types of fields: -* [MarshallField](#marshallfield) - Used the point to a `model` field, a python `property` that is also declared -inside the Edgy model or a function. -* [MarshallMethodField](#marshallmethodfield) - Used to point to a function that is declared **inside the marshall** -and **not inside the model**. +* [MarshallField](#marshallfield): Used to reference a model field, a Python `property` defined in the Edgy model, or a function. +* [MarshallMethodField](#marshallmethodfield): Used to reference a function defined **within the marshall**, not the model. -To use the fields, you can simply import it. +To use these fields, import them: ```python from edgy.core.marshalls import fields ``` -All the fields have the **mandatory** attribute `field_type`. This is used to declare which type -of field should be used for automatic validation of Pydantic. +All fields have a **mandatory** attribute `field_type`, which specifies the Python type used by Pydantic for validation. -**Example** +**Example:** ```python class CustomMarshall(Marshall): @@ -134,17 +109,16 @@ class CustomMarshall(Marshall): ### MarshallField -This is the most common field you can declare in your marshall. 
+This is the most common field type used in marshalls. #### Parameters -* **field_type** - The Python type that is used by Pydantic to validate the data. -* **source** - The source of the field to be gathered from the model. It can be directly the model -field, a property or a function. +* **field_type:** The Python type used by Pydantic for data validation. +* **source:** The source of the field, which can be a model field, a property, or a function. -**All of the values passed in the source must come from the Edgy Model**. +**All values passed in the source must come from the Edgy Model.** -**Example** +**Example:** ```python {!> ../docs_src/marshalls/source.py !} @@ -152,31 +126,27 @@ field, a property or a function. ### MarshallMethodField -This function is used to get extra information that is provided by the `Marshall` itself. +This field type is used to retrieve extra information provided by the `Marshall` itself. -When declaring a `MarshallMethodField` you must have the function `get_` with the corresponding -name of the field used by the `MarshallMethodField`. +When declaring a `MarshallMethodField`, you must define a function named `get_` followed by the field name. -When declaring the function, Edgy will automatically inject an object (instance) of the Edgy model -declared in the `marshall_config`. This instance **is not persisted in the database** unless you -specifically [save it](#save), which means, the `primary_key` will not be available until then but -the remaining object, functions, attributes and operations, are. +Edgy automatically injects an instance of the Edgy model declared in `marshall_config` into this function. This instance **is not persisted in the database** unless you explicitly [save it](#save). Therefore, the `primary_key` will not be available until then, but other object attributes and operations are. #### Parameters -* **field_type** - The Python type that is used by Pydantic to validate the data. +* **field_type:** The Python type used by Pydantic for data validation. -**Example** +**Example:** ```python {!> ../docs_src/marshalls/method_field.py !} ``` -## Including additional context +## Including Additional Context -In certain scenarios, it is necessary to provide additional context to the marshall. Additional context can be provided by passing a context argument when instantiating the marshall. +In some cases, you might need to provide extra context to a marshall. You can do this by passing a `context` argument when instantiating the marshall. -**Example** +**Example:** ```python class UserMarshall(Marshall): @@ -192,7 +162,7 @@ marshall = UserMarshall(**data, context={"foo": "bar"}) marshall.model_dump() ``` -And the result will be: +Result: ```json { @@ -202,27 +172,21 @@ And the result will be: } ``` -## `save()` - -Since the [Marshall](#marshall) is also a Pydantic base model, the same as Edgy, there may be some -times where you would like to persist the data directly using the marshall instead of using complicated -processes to make it happen. +## `save()` Method -This is also possible as Edgy made it simple for you. In the same way an Edgy model has the `save()` -so does the `marshall`. In reality, what Edgy is doing is performing that same Edgy `save()` operation -for you. +Since `Marshall` is a Pydantic base model, similar to Edgy models, you can persist data directly using the marshall. -How does it work? In the same way it would work for a normal Edgy model. 
+Edgy provides a `save()` method for marshalls that mirrors the `save()` method of Edgy models. ### Example -Let us assume the following example. +Using the `UserMarshall` from the previous example: ```python {!> ../docs_src/marshalls/method_field.py !} ``` -Now, to create and save an instance of the model `User`, we simply need to: +To create and save a `User` instance: ```python data = { @@ -235,24 +199,18 @@ marshall = UserMarshall(**data) await marshall.save() ``` -The marshall is smart enough to understand what fields belong to the model and what fields are -custom and specific to the marshall and persists it. +The marshall intelligently distinguishes between model fields and marshall-specific fields and persists the model fields. -## Extra considerations +## Extra Considerations -Creating a `marshall` its easy and very intuitive but there are some considerations you **must have**. +Creating marshalls is straightforward, but keep these points in mind: -#### Model fields with `null=False` +#### Model Fields with `null=False` -When declaring the [ConfigMarshall](#configmarshall) `fields`, you -**must select at least the mandatory fields necessary, `null=False`, or a `MarshallFieldDefinitionError` -will be raised. +When declaring `ConfigMarshall` `fields`, you **must select at least the mandatory fields (`null=False`)**, or a `MarshallFieldDefinitionError` will be raised. -This is used to prevent any unnecessary errors from happening when the creation of the model -occurs. +This prevents errors during model creation. -#### Model validators +#### Model Validators -This remains exactly was it was before, meaning, if you want to validate the fields of the model -when creating an instance (persisted or not), that can and should be done using the normal -Pydantic `@model_validator` and `@field_validator`. +Model validators (using `@model_validator` and `@field_validator`) work as expected. You can use them to validate model fields during instance creation. diff --git a/docs/reference-foreignkey.md b/docs/reference-foreignkey.md index 57765789..80865642 100644 --- a/docs/reference-foreignkey.md +++ b/docs/reference-foreignkey.md @@ -1,43 +1,31 @@ -# Reference ForeignKey +# Reference ForeignKey (RefForeignKey) -This is so special and unique to **Edgy** and rarely seen (if ever) that deserves its own page in -the documentation! +The `Reference ForeignKey` (RefForeignKey) is a unique feature in **Edgy** that simplifies the creation of related records. -## What is a Reference ForeignKey +## What is a Reference ForeignKey? -Well for start it is not a normal [ForeignKey](./fields/index.md#foreignkey). The reason why calling -**RefForeignKey** it is because of its own unique type of functionality and what it can provide -when it comes to **insert** records in the database. - -This object **does not create** any foreign key in the database for you, mostly because this type -literally does not exist. Instead is some sort of a mapper that can coexist inside your [model][models] -declaration and help you with some automated tasks. +Unlike a standard [ForeignKey](./fields/index.md#foreignkey), a `RefForeignKey` does **not** create a foreign key constraint in the database. Instead, it acts as a mapper that facilitates automated record insertion. !!! Warning - The [RefForeignKey][reffk] its only used for insertion of records and not for updates. - Be very careful not to create duplicates and make those normal mistakes. 
- -As mentioned above, `RefForeignKey` will **always create** (even on `save()`) records, it won't -update if they exist. + `RefForeignKey` is **only** used for inserting records, not updating them. Exercise caution to avoid creating duplicates. -## Brief explanation +`RefForeignKey` **always creates** new records, even on `save()`, rather than updating existing ones. -In a nutshell, to use the [RefForeignKey][reffk] you will need to use a [ModelRef][model_ref]. +## Brief Explanation -The [ModelRef][model_ref] is a special Edgy object that will make sure you can interact with the -model declared and perform the operations. +To use `RefForeignKey`, you'll need a [ModelRef](#modelref). -Now, what is this useful? Let us imagine the following scenario: +[ModelRef](#modelref) is an Edgy object that enables interaction with the declared model and performs operations. -### Scenario example +**Scenario Example** -You want to create a blog or anything that has `users` and `posts`. Something like this: +Consider a blog with `users` and `posts`: ```python {!> ../docs_src/reffk/example1.py !} ``` -Quite simple so far. Now the normal way of creating `users` and `posts` would be like this: +Typically, you'd create `users` and `posts` like this: ```python # Create the user @@ -49,13 +37,13 @@ await Post.query.create(user=user, comment="Another comment") await Post.query.create(user=user, comment="A third comment") ``` -Simple, right? What if there was another way of doing this? This is where the [RefForeignKey][reffk] gets in. +`RefForeignKey` offers an alternative approach. ## RefForeignKey -A RefForeignKey is internally interpreted as a **list of the model declared in the [ModelRef][model_ref]**. +`RefForeignKey` is internally treated as a **list of the model declared in [ModelRef](#modelref)**. -How to import it: +Import it: ```python from edgy import RefForeignKey @@ -67,22 +55,19 @@ Or from edgy.core.db.fields import RefForeignKey ``` -When using the `RefForeignKey` it make it **mandatory** to populate the `to` with a `ModelRef` type -of object or it will raise a `ModelReferenceError`. +`RefForeignKey` requires the `to` parameter to be a `ModelRef` object; otherwise, it raises a `ModelReferenceError`. ### Parameters -* **to** - To which [ModelRef][model_ref] it should point. -* **null** - If the RefForeignKey should allow nulls when an instance of your model is created. +* **to:** The [ModelRef](#modelref) to point to. +* **null:** Whether to allow nulls when creating a model instance. !!! Warning - This is for when an instance is created, **not saved**, which means it will run the normal - Pydantic validations upon the creation of the object. + This applies during instance creation, not saving. It performs Pydantic validations. -### ModelRef +## ModelRef -This is another special type of object unique to **Edgy**. It is what allows you to interact with -the [RefForeignKey][reffk] and use it properly. +`ModelRef` is a special Edgy object for interacting with `RefForeignKey`. ```python from edgy import ModelRef @@ -94,125 +79,90 @@ Or from edgy.core.db.models import ModelRef ``` -The `ModelRef` when creating and declaring it makes it **mandatory** to populate the `__related_name__` -attribute or else it won't know what to do and it will raise a `ModelReferenceError`. This is good and -means you can't miss it even if you wanted to. - -The `__related_name__` attribute should point to a Relation (reverse side of ForeignKey or ManyToMany relation). 
- -The `ModelRef` is a special type from the Pydantic `BaseModel` which means you can take advantage -of everything that Pydantic can do for you, for example the `field_validator` or `model_validator` -or anything you could normally use with a normal Pydantic model. +`ModelRef` requires the `__related_name__` attribute to be populated; otherwise, it raises a `ModelReferenceError`. -#### Attention +`__related_name__` should point to a Relation (reverse side of ForeignKey or ManyToMany relation). -You need to be careful when declaring the fields of the `ModelRef` because that will be used -against the `__related_name__` declared. If the [model][models] on the reverse end of the relationship has `constraints`, `uniques` and so on -you will need to respect it when you are about to insert in the database. +`ModelRef` is a Pydantic `BaseModel`, allowing you to use Pydantic features like `field_validator` and `model_validator`. -It is also not possible to cross multiple models (except the through model in ManyToMany). +### Attention -#### Declaring a ModelRef +When declaring `ModelRef` fields, ensure they align with the `__related_name__` model's constraints and uniques. -When creating a `ModelRef`, as mentioned before, you need to declare the `__related_name__` field pointing -to the Relation you want that reference to be. +You cannot cross multiple models (except the through model in ManyToMany). -Let us be honest, would just creating the `__related_name__` be enough for what we want to achieve? No. +### Declaring a ModelRef -In the `ModelRef` you **must** also specify the fields you want to have upon the instantiation of -that model. +Declare the `__related_name__` field and specify the fields for instantiation. -Let us see an example how to declare the [ModelRef][model_ref] for a specific [model][models]. +**Example:** ```python title="The original model" {!> ../docs_src/reffk/model_ref/how_to_declare.py !} ``` -First we have a model already created which is the database table representation as per normal design, -then we can create a model reference for that same [model][models]. +Create a model reference: ```python title="The model reference" hl_lines="9-10" {!> ../docs_src/reffk/model_ref/model_ref.py !} ``` -Or if you want to have everything in one place. +Or: ```python title="The model reference" hl_lines="19-20" {!> ../docs_src/reffk/model_ref/model_ref2.py !} ``` -Another way of thinking *what fields should I put in the ModelRef* is: +Include at least the non-null fields of the referenced model. -> What minimum fields would I need to create a object of type X using the ModelRef? +## How to Use -This usually means, **you should put at least the not null fields** of the model you are referencing. +Combine `RefForeignKey` and `ModelRef` in your models. -## How to use - -Well, now that we talked about the [RefForeignKey][reffk] and the [ModelRef][model_ref], it is time -to see exactly how to use both in your models and to take advantage. - -Do you remember the [scenario](#scenario-example) above? If not, no worries, let us see it again. +**Scenario Example (Revisited)** ```python {!> ../docs_src/reffk/example1.py !} ``` -In the [scenario](#scenario-example) above we also showed how to insert and associate the posts with -the user but now it is time to use the [RefForeignKey][reffk] instead. - -**What do we needed**: - -1. The [ModelRef][model_ref] object. -2. The [RefForeignKey][reffk] field (Optionally, you can pass ModelRef instances also as positional argument). 
- -Now it is time to readapt the [scenario](#scenario-example) example to adopt the [RefForeignKey](#refforeignkey) -instead. - -### In a nutshell +Use `RefForeignKey` instead: +### In a Nutshell ```python hl_lines="10-12 18" {!> ../docs_src/reffk/nutshell.py !} ``` -That is it, you simply declare the [ModelRef][model_ref] created for the `Post` model and pass it -to the `posts` of the `User` model inside the [RefForeignKey][reffk]. In our example, the `posts` -is **not null**. +Declare the `ModelRef` for the `Post` model and pass it to the `posts` field of the `User` model. !!! Note - As mentioned before, the [RefForeignKey](#refforeignkey) **does not create** a field in the - database. This is for internal Edgy model purposes only. + `RefForeignKey` does **not** create a database field. It's for internal Edgy model purposes. -### More structured +### More Structured -The previous example has everything in one place but 99% of times you will want to have the references -somewhere else and just import them. A dedicated `references.py` file for instance. - -With this idea in mind, now it kinda makes a bit more sense doesn't it? Something like this: +Separate references into a `references.py` file: ```python hl_lines="5" title="references.py" {!> ../docs_src/reffk/references.py !} ``` -And the models with the imports. +Models with imports: ```python hl_lines="6 15" title="models.py" {!> ../docs_src/reffk/complex_example.py !} ``` -Here an example using the ModelRefs without RefForeignKey: +Using ModelRefs without RefForeignKey: ```python title="models.py" {!> ../docs_src/reffk/positional_example.py !} ``` -### Writing the results +### Writing Results -Now that we refactored the code to have the [ModelRef][model_ref] we will also readapt the way we -insert in the database from the [scenario](#scenario-example). +Adapt the insertion method from the scenario: -**Old way** +**Old Way:** ```python # Create the user @@ -224,7 +174,7 @@ await Post.query.create(user=user, comment="Another comment") await Post.query.create(user=user, comment="A third comment") ``` -**Using the ModelRef** +**Using ModelRef:** ```python # Create the posts using PostRef model @@ -236,55 +186,27 @@ post3 = PostRef(comment="A third comment") await User.query.create(name="Edgy", posts=[post1, post2, post3]) # or positional (Note: because posts has not null=True, we need still to provide the argument) await User.query.create(post1, post2, post3, name="Edgy", posts=[]) - ``` -This will now will make sure that creates all the proper objects and associated IDs in the corresponding -order, first the `user` followed by the `post` and associates that user with the created `post` -automatically. - -Ok, this is great and practical sure but coding wise, it is also very similar to the original way, -right? Yes and no. - -What if we were to apply the [ModelRef][model_ref] and the [RefForeignKey][reffk] in a proper API -call? Now, that would be interesting to see wouldn't it? +This ensures proper object creation and association. ## Using in API -As per almost everything in the documentation, **Edgy** will use [Esmerald][esmerald] as an example. -Let us see the advantage of using this new approach directly there and enjoy. - -You can see the [RefForeignKey][reffk] as some sort of ***nested*** object. - -The beauty of [RefForeignKey][reffk] is the automatic conversion of dicts, so it is interoperable with many APIs. - -### Declare the models, views and ModelRef - -Let us create the models, views and ModelRef for our `/create` API to use. 
+Use `RefForeignKey` as a nested object in your API. +### Declare Models, Views, and ModelRef ```python title="app.py" {!> ../docs_src/reffk/apis/complex_example.py !} ``` -See that we are adding some extra information in the response of our `/create` API just to make -sure you can then check the results accordingly. - -### Making the API call - -Now that we have everything in place, its time to create a `user` and at the same time create some -`posts` directly. +### Making the API Call ```python {!> ../docs_src/reffk/apis/api_call.py !} ``` -Now this is a beauty, isn't it? Now we can see the advantage of having the ModelRef. The API call -it is so much cleaner and simple and nested that one API makes it all. - -**The response** - -The if you check the response, you should see something similar to this. +**Response:** ```json { @@ -297,13 +219,9 @@ The if you check the response, you should see something similar to this. } ``` -Remember adding the `comment` and `total_posts`? Well this is why, just to confirm the total inserted -and the comment of the first inserted, - #### Errors -As per normal Pydantic validations, if you send the wrong payload, it will raise the corresponding -errors, for example: +Pydantic validations apply: ```json { @@ -314,8 +232,7 @@ errors, for example: } ``` -This will raise a `ValidationError` as the `posts` are **not null**, as expected and you should -have something similar to this as response: +Response: ```json { @@ -331,12 +248,9 @@ have something similar to this as response: } ``` -##### Sending the wrong type +##### Wrong Type -The [RefForeignKey][reffk] is **always expecting a list** to be sent, if you try to send the wrong -type, it will raise a `ValidationError`, something similar to this: - -**If we have sent a dictionary instead of a list** +`RefForeignKey` expects a list: ```json { @@ -349,11 +263,4 @@ type, it will raise a `ValidationError`, something similar to this: ## Conclusion -This is an extensive document just for one field type but it deserves as it is complex and allows -you to simplify a lot your code when you want to **insert** records in the database all in one go. - - -[models]: ./models.md -[reffk]: #refforeignkey -[model_ref]: #modelref -[esmerald]: https://esmerald.dev +`RefForeignKey` and `ModelRef` simplify database record insertion, especially in APIs. diff --git a/docs/registry.md b/docs/registry.md index 779df1bc..e50e5afa 100644 --- a/docs/registry.md +++ b/docs/registry.md @@ -1,14 +1,10 @@ # Registry -When using the **Edgy** ORM, you must use the **Registry** object to tell exactly where the -database is going to be. +When working with the **Edgy** ORM, the **Registry** object is essential for specifying the database connection. -Imagine the registry as a mapping between your models and the database where is going to be written. +Think of the registry as a mapping between your models and the database where data will be stored. -And is just that, nothing else and very simple but effective object. - -The registry is also the object that you might want to use when generating migrations using -Alembic. +It's a simple yet effective object with a crucial role. The registry is also used for generating migrations with Alembic. ```python hl_lines="19" {!> ../docs_src/registry/model.py !} @@ -16,16 +12,12 @@ Alembic. ## Parameters -* **database** - An instance of `edgy.core.db.Database` object or a string. When providing a string all unparsed keyword arguments are passed the created Database object. 
+* **database**: An instance of `edgy.core.db.Database` or a connection string. When using a string, unparsed keyword arguments are passed to the created `Database` object. -!!! Warning - Using the `Database` from the `databases` package will raise an assertation error. Edgy is build on the - fork `databasez` and it is strongly recommended to use a string, `edgy.Database` or `edgy.testclient.TestClient` instead. - In future we may add more edgy specific functionality. + !!! Warning + Using `Database` from the `databases` package raises an assertion error. Edgy uses the `databasez` fork, and it's recommended to use a string, `edgy.Database`, or `edgy.testclient.TestClient`. Future versions may add more Edgy-specific functionality. -* **schema** - The schema to connect to. This can be very useful for multi-tenancy applications if -you want to specify a specific schema or simply if you just want to connect to a different schema -that is not the default. +* **schema**: The schema to connect to. Useful for multi-tenancy applications or connecting to non-default schemas. ```python from edgy import Registry @@ -33,42 +25,33 @@ that is not the default. registry = Registry(database=..., schema="custom-schema") ``` -* **extra** - A dictionary with extra connections (same types like the database argument) which are managed by the registry too (connecting/disconnecting). They may can be arbitary connected databases. It is just ensured that they are not tore down during the registry is connected. - -* **with_content_type** - Either a bool or a custom abstract ContentType prototype. This enables ContentTypes and saves the actual used type as attribute: `content_type` +* **extra**: A dictionary of extra connections (same types as the `database` argument) managed by the registry (connecting/disconnecting). They can be arbitrary connected databases. It ensures they're not torn down while the registry is connected. +* **with_content_type**: A boolean or a custom abstract `ContentType` prototype. Enables `ContentTypes` and saves the used type as the `content_type` attribute. ## Connecting/Disconnecting -Registries support the async contextmanager protocol as well as the ASGI lifespan protocol. -This way all databases specified as database or extra are properly referenced and dereferenced -(triggering the initialization and tear down routines when reaching 0). -This way all of the dbs can be safely used no matter if they are used in different contexts. +Registries support the asynchronous context manager protocol and the ASGI lifespan protocol. This ensures all databases specified in `database` or `extra` are properly referenced and dereferenced (triggering initialization and teardown when the reference count reaches 0). This allows safe use of databases across different contexts. -## Accessing the ContentType +## Accessing ContentType -The registry has an attribute `content_type` for accessing the active ContentType. +The registry has a `content_type` attribute for accessing the active `ContentType`. -## Accessing directly the databases +## Direct Database Access -The registry has an attribute `database` for the main database and a dictionary `extra` containing the active extra -databases. -It is not necessary anymore to keep the Database object available, it can be simply retrieved from the db which is by the way -safer. This way it is ensured you get the right one. +The registry has a `database` attribute for the main database and an `extra` dictionary for extra databases. 
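+
+For example, a minimal sketch (the connection strings and the `"another"` key in `extra` are hypothetical):
+
+```python
+from edgy import Registry
+
+registry = Registry(
+    database="sqlite:///main.sqlite",
+    extra={"another": "sqlite:///other.sqlite"},
+)
+
+main_db = registry.database           # the main database object
+other_db = registry.extra["another"]  # an extra database, also managed by the registry
+```
+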
Retrieving the `Database` object from the registry is safer and ensures you get the correct instance. -## Custom registry +## Custom Registry -Can you have your own custom Registry? Yes, of course! You simply need to subclass the `Registry` -class and continue from there like any other python class. +You can create custom registries by subclassing the `Registry` class. ```python hl_lines="15 29" {!> ../docs_src/registry/custom_registry.py !} ``` -## Multiple registries +## Multiple Registries -Sometimes you might want to work with multiple databases across different functionalities and -that is also possible thanks to the registry with [Meta](./models.md#the-meta-class) combination. +You can work with multiple databases across different functionalities using multiple registries with [Meta](./models.md#the-meta-class) combinations. ```python hl_lines="26 33" {!> ../docs_src/registry/multiple.py !} @@ -76,21 +59,19 @@ that is also possible thanks to the registry with [Meta](./models.md#the-meta-cl ## Schemas -This is another great supported feature from Edgy. This allows you to manipulate database schema -operations like [creating schemas](#create-schema) or [dropping schemas](#drop-schema). +Edgy supports database schema operations like [creating schemas](#create-schema) and [dropping schemas](#drop-schema). -This can be particulary useful if you want to create a [multi-tenancy](./tenancy/edgy.md) application -and you need to generate schemas for your own purposes. +This is useful for multi-tenancy applications or custom schema management. -### Create schema +### Create Schema -As the name suggests, it is the functionality that allows you to create database schemas. +Creates database schemas. **Parameters**: -* **schema** - String name of the schema. -* **if_not_exists** - Flag indicating if should create if not exists. -* **databases** - String or None for main database. You can create schemes on databases in extra too. +* **schema**: String name of the schema. +* **if_not_exists**: Flag to create if the schema doesn't exist. +* **databases**: String or `None` for the main database. You can create schemas on databases in `extra` too. Default: `False` @@ -98,57 +79,48 @@ As the name suggests, it is the functionality that allows you to create database {!> ../docs_src/registry/create_schema.py !} ``` -Create a schema called `edgy`. +Create a schema named `edgy`. ```python await create_schema("edgy") ``` -This will make sure it will create a new schema `edgy` if it does not exist. If the `if_not_exists` -is `False` and the schema already exists, it will raise a `edgy.exceptions.SchemaError`. +This creates the `edgy` schema if it doesn't exist. If `if_not_exists` is `False` and the schema exists, it raises `edgy.exceptions.SchemaError`. -### Drop schema +### Drop Schema -As name also suggests, it is the opposite of [create_schema](#create-schema) and instead of creating -it will drop it from the database. +Drops database schemas. !!! Warning - You need to be very careful when using the `drop_schema` as the consequences are irreversible - and not only you don't want to remove the wrong schema but also you don't want to delete the - `default` schema as well. Use it with caution. + Use `drop_schema` with caution, as it's irreversible. Avoid deleting the `default` schema. **Parameters**: -* **schema** - String name of the schema. -* **cascade** - Flag indicating if should do `cascade` delete. -* +* **schema**: String name of the schema. +* **cascade**: Flag for cascade delete. 
+ Default: `False` -* **if_exists** - Flag indicating if should create if not exists. +* **if_exists**: Flag to drop if the schema exists. Default: `False` -* **databases** - String or None for main database. You can drop schemes on databases in extra too. +* **databases**: String or None for main database. You can drop schemes on databases in extra too. ```python hl_lines="11" {!> ../docs_src/registry/drop_schema.py !} ``` -Drop a schema called `edgy` +Drop a schema named `edgy`. ```python await drop_schema("edgy") ``` -This will make sure it will drop a schema `edgy` if exists. If the `if_exists` -is `False` and the schema does not exist, it will raise a `edgy.exceptions.SchemaError`. +This drops the `edgy` schema if it exists. If `if_exists` is `False` and the schema doesn't exist, it raises `edgy.exceptions.SchemaError`. -### Get default schema name +### Get Default Schema Name -This is just a helper. Each database has its own ***default*** schema name, for example, -Postgres calls it `public` and MSSQLServer calls it `dbo`. - -This is just an helper in case you need to know the default schema name for any needed purpose of -your application. +Helper function to get the default schema name for the database (e.g., `public` for Postgres, `dbo` for MSSQL). ```python hl_lines="11" {!> ../docs_src/registry/default_schema.py !} @@ -158,35 +130,31 @@ your application. {!> ../docs_src/shared/extra.md !} +## Laziness -## Lazyness - -Note: this is something for really advanced users who want to control the lazyness of `meta` objects. Skip if you just want use the framework -and don't want to micro-optimize your code. +For advanced users who want to control the laziness of `meta` objects. -Registry objects have two helper functions which can undo the lazyness (for optimizations or in case of an environment which requires everything being static after init.): +Registry objects have helper functions to undo laziness (for optimizations or static environments): -**init_models(self, *, init_column_mappers=True, init_class_attrs=True)** - Fully initializes models and metas. Some elements can be excluded from initialization by providing False to the keyword argument. +* **init_models(self, \*, init_column_mappers=True, init_class_attrs=True)**: Fully initializes models and metas. Exclude elements by setting keyword arguments to `False`. +* **invalidate_models(self, \*, clear_class_attrs=True)**: Invalidates metas and removes cached class attributes. Exclude sub-components from invalidation. -**invalidate_models(self, *, clear_class_attrs=True)** - Invalidates metas and removes cached class attributes. Single sub-components can be excluded from inval. +Model class attributes (`table`, `pknames`, `pkcolumns`, `proxy_model`, `table_schema`) are cleared or initialized. +Manual initialization is usually unnecessary and can cause performance penalties. -Model class attributes `class_attrs` which are cleared or initialized are `table`, `pknames`, `pkcolumns`, `proxy_model`, `table_schema` (only cleared). +`init_column_mappers` initializes `columns_to_field` via its `init()` method, which can be expensive for large models. -However in most cases it won't be necessary to initialize them manually and causes performance penalties. +## Callbacks -`init_column_mappers` initializes the `columns_to_field` via its `init()` method. This initializes the mappers `columns_to_field`, `field_to_columns` and `field_to_column_names`. This can be expensive for large models. 
+Use callbacks to modify models or specific models when they're available. +Register callbacks with a model name or `None` (for all models). When a model class is added, the callback is executed with the model class as a parameter. -## Callbacks +Callbacks can be permanent or one-time (triggered by the first match). If a model is already registered, it's passed to the callback. -Sometimes you want to modify all models or a specific model but aren't sure if they are available yet. -Here we have now the callbacks in a registry. -You register a callback with a model name or None (for all models) and whenever a model class of the criteria is added the callback with -the model class as parameter is executed. -Callbacks can be registered permanent or one time (the first match triggers them). If a model is already registered it is passed to the callback too. -The method is called `register_callback(model_or_name, callback, one_time)`. +Use `register_callback(model_or_name, callback, one_time)`. -Generally you use `one_time=True` for model specific callbacks and `one_time=False` for model unspecific callbacks. +Generally, use `one_time=True` for model-specific callbacks and `one_time=False` for model-unspecific callbacks. If `one_time` is not provided, the logic mentioned above is applied. diff --git a/docs/relationships.md b/docs/relationships.md index 0e6b8b52..e5360b5e 100644 --- a/docs/relationships.md +++ b/docs/relationships.md @@ -1,31 +1,25 @@ # Relationships -Creating relationships in **Edgy** is as simple as importing the fields and apply them into -the models. +Establishing relationships between models in **Edgy** is straightforward, involving importing the necessary fields and applying them to your models. -There are currently two types, the [ForeignKey](./fields/index.md#foreignkey) -and the [OneToOne](./fields/index.md#onetoone). +Edgy currently supports two relationship types: [ForeignKey](./fields/index.md#foreignkey) and [OneToOne](./fields/index.md#onetoone). -When declaring a foreign key, you can pass the value in two ways, as a string or as a model -object. Internally **Edgy** lookups up inside the [registry](./models.md#registry) and maps -your fields. +When defining a foreign key, you can specify the related model either as a string or as a model object. Edgy internally resolves the relationship using the [registry](./models.md#registry). -When declaring a model you can have one or more ForeignKey pointing to different tables or -multiple foreign keys pointing to the same table as well. +A model can have one or more foreign keys pointing to different tables or multiple foreign keys referencing the same table. !!! Tip - Have a look at the [related name](./queries/related-name.md) documentation to understand how - you can leverage reverse queries with foreign keys. + Refer to the [related name](./queries/related-name.md) documentation to learn how to leverage reverse queries with foreign keys. ## ForeignKey -Let us define the following models `User` and `Profile`. +Let's define two models, `User` and `Profile`. ```python {!> ../docs_src/relationships/model.py !} ``` -Now let us create some entries for those models. +Now, let's create some entries for these models. 
```python user = await User.query.create(first_name="Foo", email="foo@bar.com") @@ -35,47 +29,43 @@ user = await User.query.create(first_name="Bar", email="bar@foo.com") await Profile.query.create(user=user) ``` -### Multiple foreign keys pointing to the same table +### Multiple Foreign Keys Pointing to the Same Table -What if you want to have multiple foreign keys pointing to the same model? This is also easily -possible to achieve. +You can have multiple foreign keys referencing the same model. ```python hl_lines="20-29" {!> ../docs_src/relationships/multiple.py !} ``` !!! Tip - Have a look at the [related name](./queries/related-name.md) documentation to understand how - you can leverage reverse queries with foreign keys withe the - [related_name](./queries/related-name.md#related_name-attribute). + Refer to the [related name](./queries/related-name.md) documentation to understand how to leverage reverse queries with foreign keys using the [related_name](./queries/related-name.md#related_name-attribute) attribute. -### Load an instance without the foreign key relationship on it +### Load an Instance Without the Foreign Key Relationship Populated ```python profile = await Profile.query.get(id=1) -# We have an album instance, but it only has the primary key populated +# We have a profile instance, but it only has the primary key populated print(profile.user) # User(id=1) [sparse] print(profile.user.pk) # 1 print(profile.user.email) # Raises AttributeError ``` -#### Load recursive +#### Load Recursively -Especcially in connection with model_dump it is helpful to populate all foreign keys. -You can use `load_recursive` for that. +Especially when using `model_dump`, it's helpful to populate all foreign keys. You can use `load_recursive` for this. ```python profile = await Profile.query.get(id=1) await profile.load_recursive() -# We have an album instance and all foreign key relations populated -print(profile.user) # User(id=1) [sparse] +# We have a profile instance and all foreign key relations populated +print(profile.user) # User(id=1) print(profile.user.pk) # 1 -print(profile.user.email) # ok +print(profile.user.email) # foo@bar.com ``` -### Load an instance with the foreign key relationship on it +### Load an Instance with the Foreign Key Relationship Populated ```python profile = await Profile.query.get(user__id=1) @@ -83,28 +73,24 @@ profile = await Profile.query.get(user__id=1) await profile.user.load() # loads the foreign key ``` -### Load an instance with the foreign key relationship on it with select related +### Load an Instance with the Foreign Key Relationship Populated Using `select_related` ```python profile = await Profile.query.select_related("user").get(id=1) -print(profile.user) # User(id=1) [sparse] +print(profile.user) # User(id=1) print(profile.user.pk) # 1 print(profile.user.email) # foo@bar.com ``` -### Access the foreign key values directly from the model +### Access Foreign Key Values Directly from the Model !!! Note - This is only possible since the version 0.9.0 of **Edgy**, before this version, the only way was - by using the [select_related](#load-an-instance-with-the-foreign-key-relationship-on-it-with-select-related) or - using the [load()](./queries/queries.md#load-the-foreign-keys-beforehand-with-select-related). + This is possible since Edgy version 0.9.0. Before this version, you had to use `select_related` or `load()`. 
-You can access the values of the foreign keys of your model directly via model instance without -using the [select_related](#load-an-instance-with-the-foreign-key-relationship-on-it-with-select-related) or -the [load()](./queries/queries.md#load-the-foreign-keys-beforehand-with-select-related). +You can access foreign key values directly from the model instance without using `select_related` or `load()`. -Let us see an example. +Let's see an example. **Create a user and a profile** @@ -122,72 +108,63 @@ print(profile.user.email) # "foo@bar.com" print(profile.user.first_name) # "Foo" ``` -## ForeignKey constraints +## ForeignKey Constraints -As mentioned in the [foreign key field](./fields/index.md#foreignkey), you can specify constraints in -a foreign key. +As mentioned in the [foreign key field](./fields/index.md#foreignkey) documentation, you can specify constraints for foreign keys. -The available values are `CASCADE`, `SET_NULL`, `RESTRICT` and those can also be imported -from `edgy`. +The available values are `CASCADE`, `SET_NULL`, and `RESTRICT`, which can be imported from `edgy`. ```python from edgy import CASCADE, SET_NULL, RESTRICT ``` -When declaring a foreign key or a one to one key, the **on_delete must be provided** or an -`AssertationError` is raised. +When defining a foreign key or one-to-one key, the `on_delete` parameter is **mandatory**. -Looking back to the previous example. +Looking back at the previous example: ```python hl_lines="20" {!> ../docs_src/relationships/model.py !} ``` -`Profile` model defines a `edgy.ForeignKey` to the `User` with `on_delete=edgy.CASCADE` which -means that whenever a `User` is deleted from the database, all associated `Profile` instances will -also be removed. +The `Profile` model defines an `edgy.ForeignKey` to `User` with `on_delete=edgy.CASCADE`. This means that whenever a `User` is deleted, all associated `Profile` instances will also be removed. -### Delete options +### Delete Options -* **CASCADE** - Remove all referencing objects. -* **RESTRICT** - Restricts the removing referenced objects. -* **SET_NULL** - This will make sure that when an object is deleted, the associated referencing -instances pointing to that object will set to null. When this `SET_NULL` is true, the `null=True` -must be also provided or an `AssertationError` is raised. +* **CASCADE**: Remove all referencing objects. +* **RESTRICT**: Restricts the removal of referenced objects. +* **SET_NULL**: Sets the referencing instance's foreign key to `null` when the referenced object is deleted. When using `SET_NULL`, `null=True` must also be provided. ## OneToOne -Creating an `OneToOneField` relationship between models is basically the same as the -[ForeignKey](#foreignkey) with the key difference that it uses `unique=True` on the foreign key -column. +Creating a `OneToOneField` relationship between models is similar to [ForeignKey](#foreignkey), with the key difference being that it uses `unique=True` on the foreign key column. ```python hl_lines="20" {!> ../docs_src/relationships/onetoone.py !} ``` -The same rules for this field are the same as the [ForeignKey](#foreignkey) as this derives from it. +The same rules apply to this field as to [ForeignKey](#foreignkey), as it derives from it. -Let us create a `User` and a `Profile`. +Let's create a `User` and a `Profile`. ```python user = await User.query.create(email="foo@bar.com") await Profile.query.create(user=user) ``` -Now creating another `Profile` with the same user will fail and raise an exception. 
+Creating another `Profile` with the same user will fail and raise an exception. ``` await Profile.query.create(user=user) ``` - ## Limitations -We cannot cross the database with a query, yet. -This means you can not join a MySQL table with a PostgreSQL table. +Edgy currently does not support cross-database queries. + +This means you cannot join a MySQL table with a PostgreSQL table. How can this be implemented? Of course joins are not possible. The idea is to execute a query on the child database and then check which foreign key values match. -Of course the ForeignKey has no constraint and if the data vanish it points to nowhere +Of course the ForeignKey has no constraint and if the data vanish it points to nowhere. diff --git a/docs/settings.md b/docs/settings.md index d9f7c947..192bdf74 100644 --- a/docs/settings.md +++ b/docs/settings.md @@ -1,117 +1,98 @@ -# Settings +# Settings in Edgy -Who never had that feeling that sometimes haing some database settings would be nice? Well, since -Edgy is from the same author of Esmerald and since Esmerald is [settings][esmerald_settings] oriented, why not apply -the same principle but in a simpler manner but to Edgy? - -This is exactly what happened. +Have you ever wished you could easily configure database settings? Since Edgy is created by the same author as Esmerald, and Esmerald is [settings][esmerald_settings] oriented, Edgy adopts a similar approach, albeit in a simpler form. ## Edgy Settings Module -The way of using the settings object within a Edgy use of the ORM is via: +Edgy uses the following environment variable to locate its settings: -* **EDGY_SETTINGS_MODULE** environment variable. +* **EDGY_SETTINGS_MODULE** -All the settings are **[Pydantic BaseSettings](https://pypi.org/project/pydantic-settings/)** objects which makes it easier to use and override -when needed. +All settings are **[Pydantic BaseSettings](https://pypi.org/project/pydantic-settings/)** objects, making them easy to use and override. ### EDGY_SETTINGS_MODULE -Edgy by default uses is looking for a `EDGY_SETTINGS_MODULE` environment variable to run and -apply the given settings to your instance. +Edgy looks for the `EDGY_SETTINGS_MODULE` environment variable to load and apply settings. -If no `EDGY_SETTINGS_MODULE` is found, Edgy then uses its own internal settings which are -widely applied across the system. +If `EDGY_SETTINGS_MODULE` is not found, Edgy uses its internal default settings. -#### Custom settings +#### Custom Settings -When creating your own custom settings class, you should inherit from `EdgySettings` (or the subclass `TenancySettings` in case of multi tenancy). `EdgySettings` is -the class responsible for all internal settings of Edgy and those can be extended and overriden -with ease. +To create custom settings, inherit from `EdgySettings` (or `TenancySettings` for multi-tenancy). `EdgySettings` handles Edgy's internal settings, which you can extend or override. -Something like this: +Example: ```python title="myproject/configs/settings.py" {!> ../docs_src/settings/custom_settings.py !} ``` -Super simple right? Yes and that is the intention. Edgy does not have a lot of settings but -has some which are used across the codebase and those can be overriden easily. +Edgy's settings are designed to be simple and easily overridable. !!! Danger - Be careful when overriding the settings as you might break functionality. It is your own risk - doing it. + Exercise caution when overriding settings, as it may break functionality. 
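+
+For illustration, a minimal sketch of such a class (assuming `EdgySettings` can be imported from `edgy`; the preloaded module path and the extra IPython flag are hypothetical examples of the parameters documented below):
+
+```python
+from edgy import EdgySettings
+
+
+class MyCustomSettings(EdgySettings):
+    # Preload the module that sets the Edgy instance (see `preloads` below).
+    preloads: list[str] = ["myproject.main"]
+    # Extra arguments forwarded to `edgy shell`.
+    ipython_args: list[str] = ["--no-banner", "--classic"]
+```
+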
##### Parameters -* **preloads** - List of imports preloaded. Non-existing imports are simply ignored. - Can be used to inject a path to a module in which the instance is set. - It takes strings in format `module` and `module:fn`. In the later case the function or callable is executed without arguments. +* **preloads**: List of imports to preload. Non-existent imports are ignored. Can be used to inject a path to a module in which the instance is set. Takes strings in format `module` and `module:fn`. In the latter case the function or callable is executed without arguments. Default: `[]` -* **extensions** - List of Monkay extensions for edgy. See [Extensions](./extensions.md) for more details. Extensions can of course also preload imports. +* **extensions**: List of Monkay extensions for Edgy. See [Extensions](./extensions.md) for details. Extensions can also preload imports. Default: `[]` -* **ipython_args** - List of arguments passed to `ipython` when starting the `edgy shell`. +* **ipython_args**: List of arguments passed to `ipython` when starting `edgy shell`. Default: `["--no-banner"]` -* **ptpython_config_file** - Config file to be loaded into `ptpython` when starting the `edgy shell --kernel ptpython`. +* **ptpython_config_file**: Config file loaded into `ptpython` when starting `edgy shell --kernel ptpython`. Default: `"~/.config/ptpython/config.py"` +#### How to Use It -#### How to use it +Similar to [Esmerald settings][esmerald_settings], Edgy uses the `EDGY_SETTINGS_MODULE` environment variable. -Similar to [esmerald settings][esmerald_settings], Edgy uses it in a similar way. - -Using the example [above](#custom-settings) and the location `myproject/configs/settings.py`, the -settings should be called like this: +Using the example from [above](#custom-settings) and the location `myproject/configs/settings.py`, the settings should be called like this: ```shell $ EDGY_SETTINGS_MODULE=myproject.configs.settings.MyCustomSettings edgy ``` -Optional prequesite: set one of the preload imports to the application path. This way you can skip -providing the `--app` parameter or providing the `EDGY_DEFAULT_APP`. +Optional prerequisite: set one of the preload imports to the application path. This way you can skip providing the `--app` parameter or providing the `EDGY_DEFAULT_APP`. Example: -**Starting the default shell** +**Starting the default shell:** ```shell $ EDGY_SETTINGS_MODULE=myproject.configs.settings.MyCustomSettings edgy shell ``` -**Starting the PTPython shell** +**Starting the PTPython shell:** ```shell $ EDGY_SETTINGS_MODULE=myproject.configs.settings.MyCustomSettings edgy shell --kernel ptpython ``` -**Creating the migrations folder** +**Creating the migrations folder:** ```shell $ EDGY_SETTINGS_MODULE=myproject.configs.settings.MyCustomSettings edgy init ``` -**Generating migrations** +**Generating migrations:** ```shell $ EDGY_SETTINGS_MODULE=myproject.configs.settings.MyCustomSettings edgy makemigrations ``` -**Appying migrations** +**Applying migrations:** ```shell $ EDGY_SETTINGS_MODULE=myproject.configs.settings.MyCustomSettings edgy migrate ``` -And the list goes on and on, you get the gist. To understand which commands are available, check -the [commands](./migrations/migrations.md) available to you and the [shell support](./shell.md) for -the Edgy shell support. - +And so on. To see available commands, check the [commands](./migrations/migrations.md) and [shell support](./shell.md). 
-[esmerald_settings]: https://esmerald.dev/application/settings/
+[esmerald_settings]: https://esmerald.dev/application/settings/
diff --git a/docs/shell.md b/docs/shell.md
index fb728d9f..d758c95d 100644
--- a/docs/shell.md
+++ b/docs/shell.md
@@ -1,60 +1,56 @@
 # Shell Support
 
-Who never needed to load a few database models ina command line or have the need to do it so and
-got stuck trying to do it and wasted a lot of time?
+Have you ever found yourself needing to quickly interact with your database models directly from the command line? Perhaps you wanted to test a query, inspect data, or perform some quick data manipulation without writing a full script. If you've struggled with setting up such an environment in the past, Edgy's shell support is designed to make your life easier.
 
-Well, Edgy gives you that possibility completely out of the box and ready to use with your
-application models.
+Edgy provides an interactive Python shell that automatically loads your application's models, allowing you to seamlessly interact with your database. This feature is incredibly useful for development, debugging, and exploration.
 
 !!! Warning
-    Be aware of the use of this special class in production! It is advised not to use it there.
+    While the Edgy shell is a powerful tool, it's generally not recommended for use in production environments. Its primary purpose is for development and debugging.
+
+## Important: Application Discovery
 
-## Important
 
-Before reading this section, you should get familiar with the ways Edgy handles the discovery
-of the applications.
+Before diving into the shell, it's crucial to understand how Edgy discovers your application. The shell relies on the same discovery mechanisms used by Edgy's migration system.
 
-The following examples and explanations will be using the [auto discovery](./migrations/discovery.md#auto-discovery)
-but [--app and environment variables](./migrations/discovery.md#environment-variables) approach but the
-is equally valid and works in the same way.
+The following examples will primarily demonstrate the [auto-discovery](./migrations/discovery.md#auto-discovery) approach, but the concepts are equally applicable to the [--app and environment variables](./migrations/discovery.md#environment-variables) method.
 
+## How It Works: Behind the Scenes
 
-## How does it work
+Edgy's shell functionality is designed to be user-friendly, abstracting away much of the underlying complexity. Here's a simplified breakdown of what happens when you launch the Edgy shell:
 
+1. **Application Discovery:** Edgy uses the same logic as its migration system to locate your application. This involves identifying the application where your Edgy models are defined.
+2. **Registry Extraction:** Once the application is located, Edgy extracts the [registry](./registry.md) object. The registry is responsible for managing your database connection and model definitions.
+3. **Model Loading:** Edgy then automatically loads all your defined [models](./models.md) and [reflected models](./reflection/reflection.md) into the interactive Python shell's namespace. This makes them readily available for you to use.
+4. **Shell Initialization:** Finally, Edgy initializes the interactive Python shell, providing you with a ready-to-use environment for interacting with your models.
 
-Edgy ecosystem is complex internally but simpler to the user.
Edgy will use the application -using the [migration](./migrations/migrations.md#migration) and automatically extract the -[registry](./registry.md) from it. +This process ensures that your shell environment is correctly configured and that all your models are accessible, saving you the time and effort of manually setting up these components. -From there it will automatically load the [models](./models.md) and [reflected models](./reflection/reflection.md) -into the interactive python shell and load them for you with ease 🎉. +### Requirements: Installing Interactive Shells -### Requirements +Edgy's shell support integrates with popular interactive Python shells, specifically `ipython` and `ptpython`. To use the Edgy shell, you'll need to have one or both of these installed. -To run any of the available shells you will need `ipython` or `ptpython` or both installed. +**IPython:** -**IPython** +IPython is a powerful interactive shell that provides enhanced features like tab completion, syntax highlighting, and magic commands. + +To install IPython: ```shell $ pip install ipython ``` -or +**PTPython:** -```shell -$ pip install edgy[ipython] -``` +PTPython is another excellent interactive Python shell that offers features like auto-completion, syntax highlighting, and multiline editing. -**PTPython** +To install PTPython: ```shell $ pip install ptpython ``` -or - -```shell -$ pip install edgy[ptpyton] -``` +Having these shells installed enables you to choose your preferred interactive environment when using the Edgy shell. ### How to call it diff --git a/docs/signals.md b/docs/signals.md index 4e726b80..c63b6921 100644 --- a/docs/signals.md +++ b/docs/signals.md @@ -1,37 +1,20 @@ # Signals -Sometimes you might want to *listen* to a model event upon the save, meaning, you want to do a -specific action when something happens in the models. +In Edgy, signals provide a mechanism to "listen" to model events, triggering specific actions when events like saving or deleting occur. This is similar to Django's signals but also draws inspiration from Ormar's implementation, and leverages the `blinker` library for anonymous signals. -Django for instance has this mechanism called `Signals` which can be very helpful for these cases -and to perform extra operations once an action happens in your model. +## What are Signals? -Other ORMs did a similar approach to this and a fantastic one was Ormar which took the Django approach -to its own implementation. +Signals are used to execute custom logic when certain events happen within Edgy models. They enable decoupling of concerns, allowing you to perform actions like sending notifications, updating related data, or logging events without cluttering your model definitions. -Edgy being the way it is designed, got the inspiration from both of these approaches and also -supports the `Signal` from blinker. This is in blinker terminology called an anonymous signal. +## Default Signals -## What are Signals +Edgy provides default signals for common model lifecycle events, which you can use out of the box. -Signals are a mechanism used to trigger specific actions upon a given type of event happens within -the Edgy models. +### How to Use Them -The same way Django approaches signals in terms of registration, Edgy does it in the similar fashion using the blinker library. +The default signals are located in `edgy.core.signals`. Import them as follows: -## Default signals - -Edgy has default receivers for each model created within the ecosystem. 
Those can be already used -out of the box by you at any time. - -There are also [custom signals](#custom-signals) in case you want an "extra" besides the defaults -provided. - -### How to use them - -The signals are inside the `edgy.core.signals` and to import them, simply run: - -``` python +```python from edgy.core.signals import ( post_delete, post_save, @@ -44,8 +27,7 @@ from edgy.core.signals import ( #### pre_save -The `pre_save` is used when a model is about to be saved and triggered on `Model.save()` and -`Model.query.create` functions. +Triggered before a model is saved (during `Model.save()` and `Model.query.create()`). ```python pre_save(send: Type["Model"], instance: "Model") @@ -53,9 +35,7 @@ pre_save(send: Type["Model"], instance: "Model") #### post_save -The `post_save` is used after the model is already created and stored in the database, meaning, -when an instance already exists after `save`. This signal is triggered on `Model.save()` and -`Model.query.create` functions. +Triggered after a model is saved (during `Model.save()` and `Model.query.create()`). ```python post_save(send: Type["Model"], instance: "Model") @@ -63,8 +43,7 @@ post_save(send: Type["Model"], instance: "Model") #### pre_update -The `pre_update` is used when a model is about to receive the updates and triggered on `Model.update()` -and `Model.query.update` functions. +Triggered before a model is updated (during `Model.update()` and `Model.query.update()`). ```python pre_update(send: Type["Model"], instance: "Model") @@ -72,8 +51,7 @@ pre_update(send: Type["Model"], instance: "Model") #### post_update -The `post_update` is used when a model **already performed the updates** and triggered on `Model.update()` -and `Model.query.update` functions. +Triggered after a model is updated (during `Model.update()` and `Model.query.update()`). ```python post_update(send: Type["Model"], instance: "Model") @@ -81,8 +59,7 @@ post_update(send: Type["Model"], instance: "Model") #### pre_delete -The `pre_delete` is used when a model is about to be deleted and triggered on `Model.delete()` -and `Model.query.delete` functions. +Triggered before a model is deleted (during `Model.delete()` and `Model.query.delete()`). ```python pre_delete(send: Type["Model"], instance: "Model") @@ -90,8 +67,7 @@ pre_delete(send: Type["Model"], instance: "Model") #### post_delete -The `post_update` is used when a model **is already deleted** and triggered on `Model.delete()` -and `Model.query.delete` functions. +Triggered after a model is deleted (during `Model.delete()` and `Model.query.delete()`). ```python post_update(send: Type["Model"], instance: "Model") @@ -99,77 +75,54 @@ post_update(send: Type["Model"], instance: "Model") ## Receiver -The receiver is the function or action that you want to perform upon a signal being triggered, -in other words, **it is what is listening to a given event**. +A receiver is a function that executes when a signal is triggered. It "listens" for a specific event. -Let us see an example. Given the following model. +Example: Given the following model: ```python {!> ../docs_src/signals/receiver/model.py !} ``` -You can set a trigger to send an email to the registered user upon the creation of the record by -using the `post_save` signal. The reason for the `post_save` it it because the notification must -be sent **after** the creation of the record and not before. If it was before, the `pre_save` would -be the one to use. 
+You can send an email to a user upon creation using the `post_save` signal: ```python hl_lines="11-12" {!> ../docs_src/signals/receiver/post_save.py !} ``` -As you can see, the `post_save` decorator is pointing the `User` model, meaning, it is "listing" -to events on that same model. - -This is called **receiver**. - -You can use any of the [default signals](#default-signals) available or even create your own -[custom signal](#custom-signals). +The `@post_save` decorator specifies the `User` model, indicating it listens for events on that model. ### Requirements -When defining your function or `receiver` it must have the following requirements: +Receivers must meet the following criteria: -* Must be a **callable**. -* Must have `sender` argument as first parameter which corresponds to the model of the sending object. -* Must have ****kwargs** argument as parameter as each model can change at any given time. -* Must be `async` because Edgy model operations are awaited. +* Must be a callable (function). +* Must have `sender` as the first argument (the model class). +* Must have `**kwargs` to accommodate changes in model attributes. +* Must be `async` to match Edgy's async operations. -### Multiple receivers +### Multiple Receivers -What if you want to use the same receiver but for multiple models? Let us now add an extra `Profile` -model. +You can use the same receiver for multiple models: ```python {!> ../docs_src/signals/receiver/multiple.py !} ``` -The way you define the receiver for both can simply be achieved like this: - ```python hl_lines="11" {!> ../docs_src/signals/receiver/post_multiple.py !} ``` -This way you can match and do any custom logic without the need of replicating yourself too much and -keeping your code clean and consistent. +### Multiple Receivers for the Same Model -### Multiple receivers for the same model - -What if now you want to have more than one receiver for the same model? Practically you would put all -in one place but you might want to do something else entirely and split those in multiple. - -You can easily achieve this like this: +You can have multiple receivers for the same model: ```python {!> ../docs_src/signals/receiver/multiple_receivers.py !} ``` -This will make sure that every receiver will execute the given defined action. - +### Disconnecting Receivers -### Disconnecting receivers - -If you wish to disconnect the receiver and stop it from running for a given model, you can also -achieve this in a simple way. +You can disconnect a receiver to prevent it from running: ```python hl_lines="20 23" {!> ../docs_src/signals/receiver/disconnect.py !} @@ -177,72 +130,54 @@ achieve this in a simple way. ## Custom Signals -This is where things get interesting. A lot of time you might want to have your own `Signal` and -not relying only on the [default](#default-signals) ones and this perfectly natural and common. - -Edgy allows the custom signals to take place per your own design. +Edgy allows you to define custom signals, extending beyond the default ones. -Let us continue with the same example of the `User` model. +Continuing with the `User` model example: ```python {!> ../docs_src/signals/receiver/model.py !} ``` -Now you want to have a custom signal called `on_verify` specifically tailored for your `User` needs -and logic. - -So define it, you can simply do: +Create a custom signal named `on_verify`: ```python hl_lines="21" {!> ../docs_src/signals/custom.py !} ``` -Yes, this simple. 
You simply need to add a new signal `on_verify` to the model signals and the -`User` model from now on has a new signal ready to be used. +The `on_verify` signal is now available for the `User` model. !!! Danger - Keep in mind **signals are class level type**, which means it will affect all of the derived - instances coming from it. Be mindful when creating a custom signal and its impacts. + Signals are class-level attributes, affecting all derived instances. Use caution when creating custom signals. -Now you want to create a custom functionality to be listened in your new Signal. +Create a receiver for the custom signal: ```python hl_lines="21 30" {!> ../docs_src/signals/register.py !} ``` -Now not only you created the new receiver `trigger_notifications` but also connected it to the -the new `on_verify` signal. - -### Rewire signals +The `trigger_notifications` receiver is now connected to the `on_verify` signal. -To not call the default lifecycle signals you can overwrite them per class. -You can either overwrite some or use the `set_lifecycle_signals_from` method of the Broadcaster (signals) +### Rewire Signals -This can be used to not call the default lifecycle signals in signals but custom ones or to use namespaces. +To prevent default lifecycle signals from being called, you can overwrite them per class or use the `set_lifecycle_signals_from` method of the Broadcaster: -Lifecycle methods are the former mentioned signals -` ```python {!> ../docs_src/signals/rewire.py !} ``` +### How to Use It -### How to use it - -Now it is time to use the signal in a custom logic, after all it was created to make sure it is -custom enough for the needs of the business logic. - -For simplification, the example below will be a very simple logic. +Use the custom signal in your logic: ```python hl_lines="17" {!> ../docs_src/signals/logic.py !} ``` -As you can see, the `on_verify`, it is only triggered if the user is verified and not anywhere else. +The `on_verify` signal is triggered only when the user is verified. -### Disconnect the signal +### Disconnect the Signal -The process of disconnecting the signal is exactly the [same as before](#disconnecting-receivers). +Disconnecting a custom signal is the same as disconnecting a default signal: ```python hl_lines="10" {!> ../docs_src/signals/disconnect.py !} diff --git a/docs/tips-and-tricks.md b/docs/tips-and-tricks.md index e12d3d79..ad6b736b 100644 --- a/docs/tips-and-tricks.md +++ b/docs/tips-and-tricks.md @@ -1,32 +1,22 @@ -# Tips and tricks +# Tips and Tricks for Edgy -This part is dedicated to some code organisation within your application. +This section provides guidance on organizing your code, particularly within an [Esmerald](https://esmerald.dymmond.com) application. While the examples are Esmerald-centric, the principles apply to any framework you use with Edgy. -The examples are more focused on the [Esmerald](https://esmerald.dymmond.com) as the author is the -same but again, you can do the same in your favourite framework. +## Centralizing Database Connections -## Placing your connection in a centralised place +Declaring database connections repeatedly throughout your application can lead to redundancy and potential issues with object identity. By centralizing your connections, you ensure consistency and prevent the creation of unnecessary objects. -This is probably what you would like to do in your application since you don't want to declare -over and over again the same variables. 
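+
+As a rough illustration of the identity problem (the connection string is hypothetical):
+
+```python
+from edgy import Registry
+
+# Each call builds a brand-new registry, so models registered on one
+# are invisible to the other.
+registry_a = Registry(database="sqlite:///db.sqlite")
+registry_b = Registry(database="sqlite:///db.sqlite")
+assert registry_a is not registry_b
+```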
+### Global Settings File -The main reason for that is the fact that every time you declare a [registry](./registry.md) or a -`database`, in fact you are generating a new object and this is not great if you need to access -the models used with the main registry, right? +A common approach is to store connection details in a global settings file. This is especially convenient with Esmerald, which provides easy access to settings throughout your application. -### Place the connection details inside a global settings file - -This is probably the easiest way to place the connection details and particulary for Esmerald since -it comes with a simple and easy way of accesing the settings anywhere in the code. - -Something simple like this: +Example: ```python hl_lines="20-28" {!> ../docs_src/tips/settings.py !} ``` -As you can see, now you have the `db_connection` in one place and easy to access from anywhere in -your code. In the case of Esmerald: +With this setup, you can access the `db_connection` from anywhere in your code. In Esmerald: ```python hl_lines="3" from esmerald.conf import settings @@ -34,74 +24,55 @@ from esmerald.conf import settings registry = settings.db_connection ``` -**But is this enough?**. No. - -As mentioned before, when assigning or creating a variable, python itself generates a new object -with a different `id` which can differ from each time you need to import the settings into the -needed places. - -We won't talk about this pythonic tricks as there are plenty of documentation on the web and better -suited for that same purpose. - -How do we solve this issue? Enters [lru_cache](#the-lru-cache). +However, merely placing the connection details in a settings file isn't sufficient to ensure object identity. Each time you access `settings.db_connection`, a new object is created. To address this, we use the `lru_cache` technique. -## The LRU cache +## The LRU Cache -LRU extends for **least recently used**. +LRU stands for "Least Recently Used." It's a caching technique that ensures functions with the same arguments return the same cached object. This prevents redundant object creation, which is crucial for maintaining consistent database connections. -A very common technique that aims to help caching certain pieces of functionality within your -codebase and making sure you **do not generate** extra objects and this is exactly what we need. - -Use the example above, let us now create a new file called `utils.py` where we will be applying -the `lru_cache` technique for our `db_connection`. +Create a `utils.py` file to apply the `lru_cache` to your `db_connection`: ```python title="utils.py" {!> ../docs_src/tips/lru.py !} ``` -This will make sure that from now on you will always use the same connection and registry within -your appliction by importing the `get_db_connection()` anywhere is needed. - -Note, you cannot do that if `get_db_connection()` is in the same file like the application entrypoint. -Here you can use a [`edgy.monkay.instance`](#excurse-the-edgymonkayinstance-sandwich) sandwich instead. +Now, you can import `get_db_connection()` anywhere in your application and always get the same connection and registry instance. -You can also read further the [Practical Example](#practical-example). +**Important:** You cannot place `get_db_connection()` in the same file as your application entry point. In such cases, use the [`edgy.monkay.instance`](#excurse-the-edgymonkayinstance-sandwich) sandwich technique. 
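+
+A minimal sketch of such a cached factory (assuming the registry is exposed as `db_connection` on your Esmerald settings, as in the settings example above; the actual `lru.py` snippet may differ):
+
+```python
+from functools import lru_cache
+
+
+@lru_cache()
+def get_db_connection():
+    # Import lazily so the settings are only resolved when first needed.
+    from esmerald.conf import settings
+
+    # lru_cache guarantees every caller receives the same registry object.
+    return settings.db_connection
+```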
-## Excurse: The `edgy.monkay.instance` sandwich +## Excurse: The `edgy.monkay.instance` Sandwich -If you want to short down the code and concentrate in e.g. `main.py` you can also use manual post loads and do the initialization in -`get_application` this way: +If you prefer to consolidate your code within `main.py`, you can use manual post-loads and initialize connections within `get_application`. This involves: -1. Creating registry. -2. Assigning the Instance to edgy.instance via set_instance() but without app and skip extensions. -3. Post loading models. -4. Creating the main app. -5. Assigning the Instance to edgy.instance via set_instance() but with app. +1. Creating the registry. +2. Assigning the instance to `edgy.instance` using `set_instance()` (without app and skipping extensions). +3. Post-loading models. +4. Creating the main app. +5. Assigning the instance to `edgy.instance` using `set_instance()` (with app). -this looks like: +Example `main.py`: -```` python title="main.py" +````python title="main.py" {!> ../docs_src/tips/sandwich_main.py !} ```` -```` python title="myproject/models.py" +Example `myproject/models.py`: + +````python title="myproject/models.py" {!> ../docs_src/tips/sandwich_models.py !} ```` -The sandwich way has the disadvantage of having just one registry, while with the lru_cache way you can have many -registries in parallel and mix them. - +The sandwich method is limited to a single registry, while `lru_cache` allows for multiple parallel registries. -## Practical example +## Practical Example -Let us now assemble everything and generate an application that will have: +Let's assemble a practical application with: -* **User model** -* **Ready to generate** [migrations](./migrations/migrations.md) -* **Starts the database connections** +* A `User` model. +* [Migrations](./migrations/migrations.md) ready. +* Database connection setup. -For this example we will have the following structure (we won't be use using all of the files). -We won't be creating views as this is not the purpose of the example. +Project structure: ```shell . @@ -135,76 +106,50 @@ We won't be creating views as this is not the purpose of the example. └── urls.py ``` -This structure is generated by using the -[Esmerald directives](https://esmerald.dymmond.com/management/directives/) +### Settings -### The settings - -As mentioned before we will have a settings file with database connection properties assembled. -We have also `edgy_settings` defined (any name is possible). It will be used for the central configuration management +Define database connection properties in `settings.py`: ```python title="my_project/configs/settings.py" hl_lines="20-28 30-35" {!> ../docs_src/tips/settings.py !} ``` -### The utils +### Utils -Now we create the `utils.py` where we appy the [LRU](#the-lru-cache) technique. +Create `utils.py` with the `lru_cache` implementation: ```python title="myproject/utils.py" {!> ../docs_src/tips/lru.py !} ``` -Note: here we cannot just import settings. We should wait until `build_path` is called. - -### The models +**Note:** Importing settings directly is not possible here. Wait until `build_path` is called. 
-We can now start creating our [models](./models.md) and making sure we keep them always in the -same [registry](./registry.md) +### Models +Create models in `myproject/apps/accounts/models.py`: ```python title="myproject/apps/accounts/models.py" hl_lines="8 19" {!> ../docs_src/tips/models.py !} ``` -Here applied the [inheritance](./models.md#with-inheritance) to make it clean and more readable in -case we want even more models. - -As you could also notice, we are importing the `get_db_connection()` previously created. This is -now what we will be using everywhere. - -### Prepare the application to allow migrations +Use [inheritance](./models.md#with-inheritance) for cleaner code. Import `get_db_connection()` to ensure consistent registry usage. -Now it is time to tell the application that your models and migrations are managed by Edgy. -More information on [migrations](./migrations/migrations.md) where explains how to use it. +### Prepare for Migrations +Configure the application for Edgy migrations in `main.py`: ```python title="myproject/main.py" hl_lines="10 32 38-42 44" {!> ../docs_src/tips/migrations.py !} ``` -This will make sure that your application migrations are now managed by **Edgy**. +### Hook the Connection -### Hook the connection - -As a final step we now need to make sure we hook the [connection](./connection.md) in our -application. We use an approach for the central management of configuration via esmerald. For this we -provide a settings forwarder. -You can also remove the settings forward and manage edgy settings via environment variable too. +Hook the database connection in `main.py` using a settings forwarder for centralized configuration management: ```python title="myproject/main.py" hl_lines="32-38 40 48-52 54" {!> ../docs_src/tips/connection.py !} ``` -And this is it. - ## Notes -The above [example](#practical-example) shows how you could take leverage of a centralised place -to manage your connections and then use it across your application keeping your code always clean -not redundant and beautiful. - -This example is applied to any of your favourite frameworks and you can use as many and different -techniques as the ones you see fit for your own purposes. - -**Edgy is framework agnostic**. +This example demonstrates how to centralize connection management using `lru_cache` and settings files. Apply these techniques to your favorite frameworks and adapt them to your specific needs. Edgy is framework-agnostic, providing flexibility and consistency in your database interactions. diff --git a/docs/transactions.md b/docs/transactions.md index a47fcceb..cf2145a2 100644 --- a/docs/transactions.md +++ b/docs/transactions.md @@ -1,53 +1,36 @@ -# Transactions +# Transactions in Edgy -Edgy using `databases` package allows also the use of transacations in a very familiar way for -a lot of the users. - -You can see a transaction as atomic, which means, when you need to save everything or fail all. +Edgy, leveraging the `databasez` package, provides robust transaction support that will feel familiar to many developers. Transactions ensure atomicity, meaning that a series of database operations either all succeed or all fail, maintaining data consistency. !!! Tip - Check more information about [atomicity](https://en.wikipedia.org/wiki/Atomicity_(database_systems)#:~:text=An%20atomic%20transaction%20is%20an,rejecting%20the%20whole%20series%20outright) to get familiar with the concept. 
-There are three ways of using the transaction in your application:
-
-* As a [decorator](#as-a-decorator)
-* As a [context manager](#as-a-context-manager)
+    For a deeper understanding of atomicity, refer to the [Atomicity in Database Systems](https://en.wikipedia.org/wiki/Atomicity_(database_systems)#:~:text=An%20atomic%20transaction%20is%20an,rejecting%20the%20whole%20series%20outright) documentation.
 
-The following explanations and examples will take in account the following:
+Edgy offers three ways to manage transactions: as a [decorator](#as-a-decorator), as a [context manager](#as-a-context-manager), or directly on the database object.
 
-Let us also assume we want to create a `user` and a `profile` for that user in a simple endpoint.
+The following examples will use a scenario where we create a `user` and a `profile` for that user within a single endpoint.
 
 !!! danger
-    If you are trying to setup your connection within your application and have faced some errors
-    such as `AssertationError: DatabaseBackend is not running`, please see the [connection](./connection.md)
-    section for more details and how to make it properly.
+    If you encounter `AssertionError: DatabaseBackend is not running`, please consult the [connection](./connection.md) section for proper connection setup.
 
 ```python
 {!> ../docs_src/transactions/models.py!}
 ```
 
-## As a decorator
+## As a Decorator
 
-This is probably one of the less common ways of using transactions but still very useful if you
-want all of your endpoint to be atomic.
+Using transactions as decorators is less common but useful for ensuring entire endpoints are atomic.
 
-We want to create an endpoint where we save the `user` and the `profile` in one go. Since the
-author of Edgy is the same as [Esmerald](https://esmerald.dymmond.com), it makes sense to use
-it as example.
-
-**You can use whatever you want, from Starlette to FastAPI. It is your choice**.
+Consider an Esmerald endpoint (but this can be any web framework) that creates a `user` and a `profile` in one atomic operation:
 
 ```python hl_lines="18"
 {!> ../docs_src/transactions/decorator.py!}
 ```
 
-As you can see, the whole endpoint is covered to work as one transaction. This cases are rare but
-still valid to be implemented.
+In this case, the `@transaction()` decorator ensures that the entire endpoint function executes within a single transaction. This approach is suitable for cases where all operations within a function must be atomic.
 
-## As a context manager
+## As a Context Manager
 
-This is probably the most common use-case for the majority of the applications where within a view
-or an operation, you will need to make some transactions that need atomacity.
+Context managers are the most common way to manage transactions, especially when specific sections of code within a view or operation need to be atomic.
 
 It is recommended to use the model or queryset transaction method. This way the transaction of
the right database is used.
 
@@ -55,23 +38,22 @@ This way the transaction of the right database is used.
 {!> ../docs_src/transactions/context_manager.py!}
 ```
 
-It is also possible to use the current active database of a QuerySet:
+Using the current active database of a QuerySet:
 
 ```python hl_lines="23"
 {!> ../docs_src/transactions/context_manager2.py!}
 ```
 
-Of course you can also access the database and start the transaction:
+You can also access the database and start the transaction directly:
 
 ```python hl_lines="23"
 {!> ../docs_src/transactions/context_manager_direct.py!}
 ```
 
-## Important notes
+This ensures that the operations within the `async with` block are executed atomically.
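+
+As a rough sketch of that direct form (the `User` and `Profile` models are the hypothetical ones from the snippets above, and `registry.database` is the main database described in the registry docs):
+
+```python
+async def create_user_and_profile(registry):
+    # Everything inside the block is committed together or rolled back together.
+    async with registry.database.transaction():
+        user = await User.query.create(email="foo@bar.com")
+        await Profile.query.create(user=user)
+```
+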
If any operation fails, all changes are rolled back. + +## Important Notes -Edgy although running on the top of [Databasez](https://databasez.dymmond.com/) it varies in -many aspects and offers features unprovided by sqlalchemy. -For example the jdbc support or support for a mixed threading/async environment. +Edgy, while built on top of [Databasez](https://databasez.dymmond.com/), offers unique features beyond those provided by SQLAlchemy. These include JDBC support and compatibility with mixed threading/async environments. -If you are interested in knowing more about the low-level APIs of databasez, -[check out](https://github.com/dymmond/databasez) or [documentation](https://databasez.dymmond.com/). +For more information on the low-level APIs of Databasez, refer to the [Databasez repository](https://github.com/dymmond/databasez) and its [documentation](https://databasez.dymmond.com/).