DevSetGo Library
Description | Common functions for Python applications. This is to increase reusability and limit rewriting the same functions in multiple applications. It also allows for defects to be addressed quickly and then be propagated across applications. |
Author(s) | Mike Ryan |
Repository | https://github.com/devsetgo/dsg_lib |
Copyright | Copyright © 2016 - 2025 Mike Ryan |
DevSetGo Common Library¶
devsetgo_lib is a versatile library designed to provide common functions for Python applications. Its main goal is to increase reusability and reduce the need to rewrite the same functions across multiple applications. This also allows for quick defect resolution and propagation of fixes across all dependent projects.
Read the Full Documentation here.
Key Features¶
Common Functions:¶
- File Operations:
  - CSV, JSON, and Text File Functions: Create, read, write, and manipulate various file types with ease.
  - Folder Functions: Create and remove directories, list directory contents, and manage file system operations efficiently.
- File Moving: Move files from one directory to another, with an option to compress the file being moved.
- Logging: Comprehensive logging setup using the Loguru Library. Provides extensive customization options for log configuration, including log rotation, retention, and formatting. Includes improvements for multiprocessing environments to ensure log messages are handled correctly across multiple processes.
- Calendar Functions: Convert between month names and numbers seamlessly.
- Pattern Matching: Powerful tools for searching patterns in text using regular expressions.
FastAPI Endpoints:¶
- Pre-built endpoints for system health checks, status, and uptime monitoring.
- Functions to generate HTTP response codes easily.
Async Database:¶
- Configuration and management of asynchronous database sessions.
- CRUD operations with async support.
Quick Reference¶
- Logging & Config Setup
- FastAPI Endpoints
- Calendar & Date Utilities
- Pattern Matching
- CSV & JSON Helpers
Installation¶
To install devsetgo_lib, use pip:
pip install devsetgo-lib
# For async database setup with SQLite or PostgreSQL
pip install devsetgo-lib[sqlite]
pip install devsetgo-lib[postgres]
# Experimental support for other databases
pip install devsetgo-lib[oracle]
pip install devsetgo-lib[mssql]
pip install devsetgo-lib[mysql]
# For adding FastAPI endpoints
pip install devsetgo-lib[fastapi]
# Install everything
pip install devsetgo-lib[all]
Usage¶
Here's a quick example to demonstrate how you can use some of the key features of devsetgo_lib:
from loguru import logger

from dsg_lib.common_functions import calendar_functions, file_functions, logging_config, patterns

# File Operations
file_functions.create_sample_files("example", 100)
content = file_functions.open_csv("example.csv")
print(content)

# Logging (config_log sets up Loguru; use the loguru logger afterwards)
logging_config.config_log(logging_directory='logs', log_name='app.log', logging_level='DEBUG')
logger.info("This is an info message")
# Pattern Matching
text = "Hello, my name is 'John Doe' and I live in 'New York'."
results = patterns.pattern_between_two_char(text, "'", "'")
print(results)
# Calendar Functions
print(calendar_functions.get_month(1)) # Output: 'January'
print(calendar_functions.get_month_number('January')) # Output: 1
For detailed documentation on each module and function, please refer to the official documentation.
Contributing¶
We welcome contributions! Please see our contributing guidelines for more details.
License¶
This project is licensed under the MIT License. See the LICENSE file for more details.
Contact¶
For any questions or issues, please open an issue on GitHub or contact us at devsetgo@example.com.
Quick Start¶
See the documentation for more examples of library use.
Common Functions
Reference¶
dsg_lib.common_functions.logging_config¶
This module provides a comprehensive logging setup using the loguru library, facilitating easy logging management for Python applications.
The config_log function, central to this module, allows for extensive customization of logging behavior. It supports specifying the logging directory, log file name, logging level, and controls for log rotation, retention, and formatting, among other features. Additionally, it offers advanced options like backtrace and diagnose for in-depth debugging, and the ability to append the application name to the log file for clearer identification.
Usage example:
from dsg_lib.common_functions.logging_config import config_log
from loguru import logger

config_log(
    logging_directory='logs',   # Directory for storing logs
    log_name='log',             # Base name for log files
    logging_level='DEBUG',      # Minimum logging level
    log_rotation='100 MB',      # Size threshold for log rotation
    log_retention='30 days',    # Duration to retain old log files
    enqueue=True,               # Enqueue log messages
)
# Example log messages
logger.debug("This is a debug message")
logger.info("This is an info message")
logger.error("This is an error message")
logger.warning("This is a warning message")
logger.critical("This is a critical message")
Todo
- Add support for additional logging handlers.
- Implement asynchronous logging.
Date Created
2021/07/16
Date Updated
2024/07/27
License
MIT
SafeFileSink¶
A class to handle safe file logging with rotation and retention policies.
This class provides mechanisms to manage log files by rotating them based on size and retaining them for a specified duration. It also supports optional compression of log files.
Attributes (names inferred from the example below; types inferred from the parse methods):
Name | Type | Description |
---|---|---|
path | str | The path to the log file. |
rotation | int | The size threshold for log rotation, in bytes. |
retention | timedelta | The duration to retain old log files. |
compression | str | The compression method to use for old log files. |
Methods:
Name | Description |
---|---|
parse_size | Parses a size string (e.g., '100MB') and returns the size in bytes. |
parse_duration | Parses a duration string (e.g., '7 days') and returns a timedelta object. |
Example
safe_file_sink = SafeFileSink(
    path='logs/app.log',
    rotation='100 MB',
    retention='30 days',
    compression='zip'
)
This will set up a log file at 'logs/app.log' with rotation at 100 MB, retention for 30 days, and compression using zip.
Source code in dsg_lib/common_functions/logging_config.py
__call__(message)¶
Handles the logging of a message, including writing, rotating, and applying retention policies.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
message | str | The log message to be written. | required |
This method ensures thread-safe logging by acquiring a lock before writing the message, rotating the logs if necessary, and applying the retention policy to remove old log files.
Source code in dsg_lib/common_functions/logging_config.py
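The lock-then-write flow described above can be sketched with a minimal, hypothetical sink class. `MiniFileSink` is illustrative only, not dsg_lib's actual `SafeFileSink`; the real class also rotates logs and applies retention inside the locked section.

```python
import threading
from pathlib import Path


class MiniFileSink:
    """Illustrative sketch of a thread-safe file sink (not dsg_lib's code)."""

    def __init__(self, path: str):
        self.path = Path(path)
        self.lock = threading.Lock()

    def __call__(self, message: str) -> None:
        # Acquire the lock so concurrent writers cannot interleave output
        with self.lock:
            self.path.parent.mkdir(parents=True, exist_ok=True)
            with self.path.open("a") as log_file:
                log_file.write(message)
            # rotate_logs() and apply_retention() would run here
```

Calling an instance appends the message to the file, which is the sink contract Loguru expects from a callable.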
apply_retention()¶
Applies the retention policy to remove old log files.
This method iterates through the log files in the directory of the current log file. It checks the modification time of each log file and removes those that are older than the specified retention period.
Returns:
Type | Description |
---|---|
None | None |
Raises:
Type | Description |
---|---|
OSError | If there is an error removing a log file. |
Source code in dsg_lib/common_functions/logging_config.py
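The retention pass can be sketched as a stand-alone function. This is a hypothetical illustration, not the library's exact implementation; the `*.log*` glob used to find sibling log files is an assumption.

```python
import time
from datetime import timedelta
from pathlib import Path


def apply_retention_sketch(log_file: str, retention: timedelta) -> list:
    """Delete sibling log files whose modification time is older than
    the retention window. Illustrative only."""
    cutoff = time.time() - retention.total_seconds()
    removed = []
    for candidate in Path(log_file).parent.glob("*.log*"):
        # Compare each file's mtime against the retention cutoff
        if candidate.stat().st_mtime < cutoff:
            candidate.unlink()  # may raise OSError, as noted above
            removed.append(candidate.name)
    return removed
```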
parse_duration(duration_str) staticmethod¶
Parses a duration string and returns a timedelta object.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
duration_str | str | The duration string (e.g., '7 days'). | required |
Returns:
Name | Type | Description |
---|---|---|
timedelta | | The duration as a timedelta object. |
Source code in dsg_lib/common_functions/logging_config.py
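A minimal sketch of how such a duration parser might work. This is hypothetical; the units the library accepts and its edge-case handling may differ.

```python
from datetime import timedelta


def parse_duration_sketch(duration_str: str) -> timedelta:
    """Parse strings like '7 days' or '12 hours' into a timedelta."""
    value, unit = duration_str.split()
    unit = unit.lower().rstrip("s")  # accept both 'day' and 'days'
    seconds_per_unit = {
        "second": 1,
        "minute": 60,
        "hour": 3600,
        "day": 86400,
        "week": 604800,
    }
    return timedelta(seconds=int(value) * seconds_per_unit[unit])


print(parse_duration_sketch("7 days"))  # 7 days, 0:00:00
```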
parse_size(size_str) staticmethod¶
Parses a size string and returns the size in bytes.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
size_str | str | The size string (e.g., '100MB'). | required |
Returns:
Name | Type | Description |
---|---|---|
int | | The size in bytes. |
Source code in dsg_lib/common_functions/logging_config.py
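A sketch of the size-parsing logic. This is hypothetical; whether the library uses decimal (1 MB = 10^6) or binary (1 MiB = 2^20) multipliers is an assumption here.

```python
import re


def parse_size_sketch(size_str: str) -> int:
    """Parse '100MB' / '100 MB' style strings into a byte count,
    assuming decimal multipliers. Illustrative only."""
    match = re.fullmatch(r"\s*(\d+)\s*([KMGT]?B)\s*", size_str, re.IGNORECASE)
    if not match:
        raise ValueError(f"Unrecognized size string: {size_str!r}")
    value, unit = int(match.group(1)), match.group(2).upper()
    multipliers = {"B": 1, "KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
    return value * multipliers[unit]
```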
rotate_logs()¶
Rotates the log file if it exceeds the specified rotation size.
This method checks the size of the current log file. If the file size exceeds the specified rotation size, it renames the current log file by appending a timestamp to its name. Optionally, it compresses the rotated log file using the specified compression method and removes the original uncompressed file.
Returns:
Type | Description |
---|---|
None | None |
Raises:
Type | Description |
---|---|
OSError | If there is an error renaming or compressing the log file. |
Source code in dsg_lib/common_functions/logging_config.py
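The check-size, rename-with-timestamp, optionally-compress sequence described above can be sketched like this. It is a hypothetical stand-in, not the library's code; the timestamp format and `.gz` suffix are assumptions.

```python
import gzip
from datetime import datetime
from pathlib import Path
from typing import Optional


def rotate_log_sketch(path: str, max_bytes: int, compress: bool = False) -> Optional[str]:
    """Rename the log with a timestamp suffix once it grows past
    max_bytes; optionally gzip the rotated copy. Illustrative only."""
    log = Path(path)
    if not log.exists() or log.stat().st_size <= max_bytes:
        return None  # nothing to rotate yet
    stamp = datetime.now().strftime("%Y%m%d%H%M%S")
    rotated = log.with_name(f"{log.name}.{stamp}")
    log.rename(rotated)
    if compress:
        with rotated.open("rb") as src, gzip.open(f"{rotated}.gz", "wb") as dst:
            dst.write(src.read())
        rotated.unlink()  # drop the uncompressed copy, as described above
        return f"{rotated}.gz"
    return str(rotated)
```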
write_message(message)¶
Writes a log message to the log file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
message | str | The log message to be written. | required |
This method opens the log file in append mode and writes the message to it.
Source code in dsg_lib/common_functions/logging_config.py
config_log(logging_directory='log', log_name='log', logging_level='INFO', log_rotation='100 MB', log_retention='30 days', log_backtrace=False, log_format=None, log_serializer=False, log_diagnose=False, app_name=None, append_app_name=False, enqueue=True, intercept_standard_logging=True, compression='zip')¶
Configures the logging settings for the application.
This function sets up the logging configuration, including the log directory, log file name, logging level, log rotation, retention policies, and other optional settings.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
logging_directory | str | The directory where log files will be stored. Defaults to "log". | 'log' |
log_name | str | The base name of the log file. Defaults to "log". | 'log' |
logging_level | str | The logging level (e.g., "INFO", "DEBUG"). Defaults to "INFO". | 'INFO' |
log_rotation | str | The size threshold for log rotation (e.g., "100 MB"). Defaults to "100 MB". | '100 MB' |
log_retention | str | The duration to retain old log files (e.g., "30 days"). Defaults to "30 days". | '30 days' |
log_backtrace | bool | Whether to include backtrace information in logs. Defaults to False. | False |
log_format | str | The format string for log messages. Defaults to a predefined format if not provided. | None |
log_serializer | bool | Whether to serialize log messages. Defaults to False. | False |
log_diagnose | bool | Whether to include diagnostic information in logs. Defaults to False. | False |
app_name | str | The name of the application. Defaults to None. | None |
append_app_name | bool | Whether to append the application name to the log file name. Defaults to False. | False |
enqueue | bool | Whether to enqueue log messages for asynchronous processing. Defaults to True. | True |
intercept_standard_logging | bool | Whether to intercept standard logging calls. Defaults to True. | True |
compression | str | The compression method for rotated log files (e.g., "zip"). Defaults to 'zip'. | 'zip' |
Returns:
Type | Description |
---|---|
None | None |
Example
config_log(
    logging_directory='logs',
    log_name='app_log',
    logging_level='DEBUG',
    log_rotation='50 MB',
    log_retention='7 days',
    log_backtrace=True,
    log_format='{time} - {level} - {message}',
    log_serializer=True,
    log_diagnose=True,
    app_name='MyApp',
    append_app_name=True,
    enqueue=False,
    intercept_standard_logging=False,
    compression='gz'
)
This will configure the logging settings with the specified parameters, setting up a log file at 'logs/app_log' with rotation at 50 MB, retention for 7 days, and the other specified options.
Source code in dsg_lib/common_functions/logging_config.py
Reference¶
dsg_lib.common_functions.file_functions¶
file_functions.py
This module provides a function to delete a file with a specified name from a specified directory.
Functions:
Name | Description |
---|---|
delete_file(file_name: str) -> str | Deletes a file with the specified file name from the specified directory. Returns a string indicating that the file has been deleted. Raises TypeError if the file name is not a string; ValueError if the file name contains a forward slash or backslash, or if the file type is not supported; FileNotFoundError if the file does not exist. |
Example:
from dsg_lib.common_functions import file_functions
file_functions.delete_file("test.csv")
# Outputs: 'complete'
Author: Mike Ryan Date: 2024/05/16 License: MIT
append_csv(file_name, data, root_folder=None, delimiter=',', quotechar='"')¶
Appends a list of rows to an existing CSV file with the specified file name in the specified directory. Each element of the data list should be a row (list of values), and the header in data[0] must match the existing CSV's header.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
file_name | str | The name of the CSV file to append data to. Can be provided without the '.csv' extension. | required |
data | list | The rows to append. Each element is a row (list of values); data[0] must be the header row. | required |
root_folder | str, optional | The root directory where the file is located. If None, the default directory is used. | None |
delimiter | str | The character used to separate fields in the CSV file. | ',' |
quotechar | str | The character used to quote fields in the CSV file. | '"' |
Returns:
Name | Type | Description |
---|---|---|
str | | Returns "appended" if the rows were successfully appended. |
Raises:
Type | Description |
---|---|
FileNotFoundError | If the CSV file does not exist. |
ValueError | If the header row in data[0] does not match the header in the file. |
TypeError | If data is not a list of rows. |
Example:
from dsg_lib.common_functions import file_functions
csv_rows = [
["column1", "column2"],
["appended_value1", "appended_value2"]
]
result = file_functions.append_csv(
file_name="test.csv",
data=csv_rows,
root_folder="/path/to/directory"
)
# result would be "appended" on success
Additional usage info
- Ideal for appending more rows to an existing CSV with matching header.
- Defaults to "data/csv" if no root_folder is provided.
- You can supply any valid file path in root_folder to override.
Source code in dsg_lib/common_functions/file_functions.py
create_sample_files(file_name, sample_size)¶
Create sample CSV and JSON files with random data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
file_name | str | The base name for the sample files (without extension). | required |
sample_size | int | The number of rows to generate for the sample files. | required |
Returns:
Type | Description |
---|---|
None | None |
Raises:
Type | Description |
---|---|
| If an error occurs while creating the sample files. |
Example:
from dsg_lib.common_functions import file_functions
file_functions.create_sample_files(file_name="test", sample_size=100)
# Creates 'test.csv' and 'test.json', each with 100 rows of random data
Source code in dsg_lib/common_functions/file_functions.py
delete_file(file_name)¶
Deletes a file with the specified file name from the specified directory. The file type is determined by the file extension.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
directory_to_files | str | The directory where the file is located. | required |
file_name | str | The name of the file to be deleted. | required |
Returns:
Name | Type | Description |
---|---|---|
str | | A message indicating whether the file has been deleted successfully or an error occurred. |
Raises:
Type | Description |
---|---|
TypeError | If the directory or file name is not a string. |
ValueError | If the file type is not supported. |
FileNotFoundError | If the file does not exist. |
Example:
from dsg_lib.common_functions import file_functions
file_functions.delete_file(file_name="test.csv")
# Outputs: 'complete'
Source code in dsg_lib/common_functions/file_functions.py
generate_random_date()¶
Generate a random datetime string in the format yyyy-mm-dd hh:mm:ss.ffffff.
Returns:
Name | Type | Description |
---|---|---|
str | | A randomly generated datetime string. |
Example:
from dsg_lib.common_functions import file_functions
random_timestamp = file_functions.generate_random_date()
# random_timestamp might look like '1992-03-15 10:30:45.123456'
Source code in dsg_lib/common_functions/file_functions.py
open_csv(file_name, delimiter=',', quote_level='minimal', skip_initial_space=True, **kwargs)¶
Opens a CSV file with the specified file name and returns its contents as a list of dictionaries.
Source code in dsg_lib/common_functions/file_functions.py
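No usage example is given above, but the shape of the return value can be illustrated with the stdlib analogue csv.DictReader; open_csv layers path handling plus delimiter and quoting options on top of this behavior.

```python
import csv
import io

# open_csv returns the CSV contents as a list of dictionaries,
# one dict per row, keyed by the header row -- like csv.DictReader.
data = io.StringIO("column1,column2\nvalue1,value2\n")
rows = list(csv.DictReader(data))
print(rows)  # [{'column1': 'value1', 'column2': 'value2'}]
```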
open_json(file_name)¶
Open a JSON file and load its contents into a dictionary.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
file_name | str | The name of the JSON file to open. | required |
Returns:
Name | Type | Description |
---|---|---|
dict | | The contents of the JSON file as a dictionary. |
Raises:
Type | Description |
---|---|
TypeError | If the file name is not a string. |
FileNotFoundError | If the file does not exist. |
Example:
from dsg_lib.common_functions import file_functions
result_dict = file_functions.open_json(file_name="test.json")
# result_dict is a dictionary loaded from 'test.json'
Source code in dsg_lib/common_functions/file_functions.py
open_text(file_name)¶
Opens a text file with the specified file name and returns its contents as a string.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
file_name | str | The name of the file to open. Should include the '.txt' extension. | required |
Returns:
Name | Type | Description |
---|---|---|
str | | The contents of the text file as a string. |
Raises:
Type | Description |
---|---|
TypeError | If the file name is not a string or contains a forward slash. |
FileNotFoundError | If the file does not exist. |
Example:
from dsg_lib.common_functions import file_functions
text_content = file_functions.open_text(file_name="test.txt")
# text_content is the file's contents as a single string
Source code in dsg_lib/common_functions/file_functions.py
save_csv(file_name, data, root_folder=None, delimiter=',', quotechar='"')¶
Saves a list of dictionaries as a CSV file with the specified file name in the specified directory. Each dictionary in the list should represent a row in the CSV file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
file_name | str | The name of the file to save the data in. Should include the '.csv' extension. | required |
data | list | The data to be saved. Each element represents a row in the CSV file. | required |
root_folder | str, optional | The root directory where the file will be saved. If None, the default directory is used. | None |
delimiter | str | The character used to separate fields in the CSV file. | ',' |
quotechar | str | The character used to quote fields in the CSV file. | '"' |
Returns:
Name | Type | Description |
---|---|---|
str | | A message indicating whether the file has been saved successfully or an error occurred. |
Raises:
Type | Description |
---|---|
TypeError | If the data is not a list, or the file name, delimiter, or quotechar is not a string. |
ValueError | If the file name does not end with '.csv'. |
Example:
from dsg_lib.common_functions import file_functions
csv_data = [
    ["column1", "column2"],
    ["value1", "value2"]
]
file_functions.save_csv(
    file_name="test.csv",
    data=csv_data,
    root_folder="/path/to/directory",
    delimiter=";",
    quotechar="'"
)
# Saves data to '/path/to/directory/test.csv'
Source code in dsg_lib/common_functions/file_functions.py
save_json(file_name, data, root_folder=None)¶
Saves a dictionary or a list as a JSON file with the specified file name in the specified directory.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
file_name | str | The name of the file to save the data in. Should include the '.json' extension. | required |
data | list or dict | The data to be saved. | required |
root_folder | str, optional | The root directory where the file will be saved. If None, the default directory is used. | None |
Returns:
Name | Type | Description |
---|---|---|
str | | A message indicating whether the file has been saved successfully or an error occurred. |
Raises:
Type | Description |
---|---|
TypeError | If the data is not a list or a dictionary, or the file name or directory is not a string. |
ValueError | If the file name contains a forward slash or backslash. |
Example:
from dsg_lib.common_functions import file_functions
json_data = {"key": "value"}
file_functions.save_json(file_name="test.json", data=json_data, root_folder="/path/to/directory")
# Saves data to '/path/to/directory/test.json'
Source code in dsg_lib/common_functions/file_functions.py
save_text(file_name, data, root_folder=None)¶
Saves a string of text to a file with the specified file name in the specified directory.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
file_name | str | The name of the file to save the data in. Should not include the '.txt' extension. | required |
data | str | The text data to be saved. | required |
root_folder | str, optional | The root directory where the file will be saved. If None, the default directory is used. | None |
Returns:
Name | Type | Description |
---|---|---|
str | | A message indicating whether the file has been saved successfully or an error occurred. |
Raises:
Type | Description |
---|---|
TypeError | If the file name or data is not a string, or the file name contains a forward slash or backslash. |
FileNotFoundError | If the specified directory does not exist. |
Example:
from dsg_lib.common_functions import file_functions
text_data = "This is a test text file."
file_functions.save_text(file_name="test.txt", data=text_data, root_folder="/path/to/directory")
# Saves data to '/path/to/directory/test.txt'
Source code in dsg_lib/common_functions/file_functions.py
Reference¶
dsg_lib.common_functions.file_mover¶
Module: file_mover Detailed file processing flow that continuously monitors and processes files from a source directory, optionally compresses them, and then moves them to a final destination. Ensures no files are lost during transfer.
Functions:
Name | Description |
---|---|
process_files_flow | Continuously monitors the source directory for files matching the given pattern, moves them to a temporary directory, optionally compresses them, and then transfers them to the final directory. |
(… Path, temp_path: Path, final_path: Path, compress: bool) -> None | Handles the internal logic of moving and optionally compressing a single file. |
Usage Example:
from dsg_lib.common_functions.file_mover import process_files_flow
process_files_flow(
source_dir="/some/source",
temp_dir="/some/temp",
final_dir="/some/final",
file_pattern="*.txt",
compress=True
)
process_files_flow(source_dir, temp_dir, final_dir, file_pattern, compress=False, max_iterations=None)¶
Continuously monitors a source directory for files. Moves files matching file_pattern to a temporary directory, optionally compresses them, then moves them to a final destination directory.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
source_dir | str | Path to the source directory to watch. | required |
temp_dir | str | Path to the temporary directory for processing. | required |
final_dir | str | Path to the final destination directory. | required |
file_pattern | str | Glob pattern for matching files (e.g. "*.txt"). | required |
compress | bool | If True, compress files before moving. Defaults to False. | False |
max_iterations | int, optional | Limit iterations in the watch loop. Defaults to None. | None |
Returns:
Type | Description |
---|---|
None | None |
Raises:
Type | Description |
---|---|
| Propagated if file operations fail. |
Example
process_files_flow("/source", "/temp", "/final", "*.pdf", compress=True)
Source code in dsg_lib/common_functions/file_mover.py
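The per-file step that process_files_flow performs can be sketched with the stdlib. This is a hypothetical helper, not dsg_lib's internal code; the `.gz` suffix and gzip compression choice are assumptions.

```python
import gzip
import shutil
from pathlib import Path


def move_one_file_sketch(src: str, final_dir: str, compress: bool = False) -> Path:
    """Optionally gzip a file, then move it into final_dir. Illustrative only."""
    source = Path(src)
    destination_dir = Path(final_dir)
    destination_dir.mkdir(parents=True, exist_ok=True)
    if compress:
        packed = source.with_suffix(source.suffix + ".gz")
        with source.open("rb") as f_in, gzip.open(packed, "wb") as f_out:
            shutil.copyfileobj(f_in, f_out)  # stream to avoid loading the whole file
        source.unlink()
        source = packed
    destination = destination_dir / source.name
    shutil.move(str(source), str(destination))
    return destination
```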
Reference¶
dsg_lib.common_functions.folder_functions¶
This module contains functions for working with directories and files.
Functions:
Name | Description |
---|---|
last_data_files_changed | Get the last modified file in a directory. |
get_directory_list | Get a list of directories in the specified directory. |
make_folder | Make a folder in a specific directory. |
remove_folder | Remove a folder from the specified directory. |
Example:
from dsg_lib.common_functions import folder_functions

# Get the last modified file in a directory
time_stamp, file_path = folder_functions.last_data_files_changed("/path/to/directory")
# Returns: (datetime.datetime(2022, 1, 1, 12, 0, 0), '/path/to/directory/test.txt')

# Get a list of directories in the specified directory
directories = folder_functions.get_directory_list("/path/to/directory")
# Returns: ['/path/to/directory/dir1', '/path/to/directory/dir2']

# Make a folder in a specific directory
folder_functions.make_folder("/path/to/directory/new_folder")
# Creates a new folder at '/path/to/directory/new_folder'

# Remove a folder from the specified directory
folder_functions.remove_folder("/path/to/directory/old_folder")
# Removes the folder at '/path/to/directory/old_folder'
Author: Mike Ryan Date: 2024/05/16 License: MIT
get_directory_list(file_directory)¶
Get a list of directories in the specified directory.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
file_directory | str | The path of the directory to check. | required |
Returns:
Type | Description |
---|---|
List[str] | A list of directories in the specified directory. |
Raises:
Type | Description |
---|---|
FileNotFoundError | If the directory does not exist. |
Example:
from dsg_lib.common_functions import folder_functions
directories = folder_functions.get_directory_list("/path/to/directory")
# Returns: ['/path/to/directory/dir1', '/path/to/directory/dir2']
Source code in dsg_lib/common_functions/folder_functions.py
last_data_files_changed(directory_path)¶
Get the last modified file in a directory and return its modification time and path.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
directory_path | str | The path of the directory to check. | required |
Returns:
Type | Description |
---|---|
Tuple[datetime, str] | A tuple containing the modification time and path of the last modified file. |
Raises:
Type | Description |
---|---|
FileNotFoundError | If the directory does not exist. |
Example:
from dsg_lib.common_functions import folder_functions
time_stamp, file_path = folder_functions.last_data_files_changed("/path/to/directory")
# Returns: (datetime.datetime(2022, 1, 1, 12, 0, 0), '/path/to/directory/test.txt')
Source code in dsg_lib/common_functions/folder_functions.py
remove_folder(file_directory)¶
Remove a folder from the specified directory.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
file_directory | str | The directory containing the folder to be removed. | required |
Returns:
Type | Description |
---|---|
None | None |
Raises:
Type | Description |
---|---|
FileNotFoundError | If the specified directory does not exist. |
OSError | If the folder cannot be removed. |
Example:
from dsg_lib.common_functions import folder_functions
folder_functions.remove_folder("/path/to/directory/old_folder")
# Removes the folder at '/path/to/directory/old_folder'
Source code in dsg_lib/common_functions/folder_functions.py
Reference¶
dsg_lib.common_functions.patterns¶
This module contains functions for pattern searching in text using regular expressions.
The main function in this module is pattern_between_two_char, which searches for all patterns between two characters in a given string. The function uses Python's built-in re module for regex searching and the loguru module for logging.
Functions:
Name | Description |
---|---|
pattern_between_two_char(text_string: str, left_characters: str, right_characters: str) -> dict | Searches for all patterns between two characters (left and right) in a given string using regular expressions. |
Example

```python
from dsg_lib.common_functions import patterns

text = "Hello, my name is 'John Doe' and I live in 'New York'."
left_char = "'"
right_char = "'"

results = patterns.pattern_between_two_char(text, left_char, right_char)
print(results)
```

This will output:

```python
{
    'found': ['John Doe', 'New York'],
    'matched_found': 2,
    'pattern_parameters': {
        'left_character': "'",
        'right_character': "'",
        'regex_pattern': "'(.+?)'",
        'text_string': "Hello, my name is 'John Doe' and I live in 'New York'."
    }
}
```
Author: Mike Ryan Date: 2024/05/16 License: MIT
pattern_between_two_char(text_string, left_characters, right_characters)¶
Searches for all patterns between two characters (left and right) in a given string using regular expressions.
This function takes a string and two characters as input, and returns a dictionary containing all patterns found between the two characters in the string. The dictionary also includes the number of matches found and the regex pattern used for searching.
The function uses Python's built-in `re` module for regex searching and the `loguru` module for logging.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| text_string | str | The string in which to search for patterns. | required |
| left_characters | str | The character(s) that appear(s) immediately to the left of the desired pattern. | required |
| right_characters | str | The character(s) that appear(s) immediately to the right of the desired pattern. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| dict | | A dictionary with the following keys: "found", a list of strings containing all patterns found; "matched_found", the number of patterns found; "pattern_parameters", a dictionary with the keys "left_character" (the escaped left character string used to build the regex pattern), "right_character" (the escaped right character string used to build the regex pattern), "regex_pattern" (the final regex pattern used for searching), and "text_string" (the escaped input string used for searching). |
Example

```python
from dsg_lib.common_functions import patterns

text = "Hello, my name is 'John Doe' and I live in 'New York'."
left_char = "'"
right_char = "'"

results = patterns.pattern_between_two_char(text, left_char, right_char)
print(results)
```

This will output:

```python
{
    'found': ['John Doe', 'New York'],
    'matched_found': 2,
    'pattern_parameters': {
        'left_character': "'",
        'right_character': "'",
        'regex_pattern': "'(.+?)'",
        'text_string': "Hello, my name is 'John Doe' and I live in 'New York'."
    }
}
```
Source code in dsg_lib/common_functions/patterns.py, lines 48–157.
Reference¶
dsg_lib.common_functions.calendar_functions¶
This module provides two main functions to convert between month numbers and their corresponding names.
Functions:
| Name | Description |
|---|---|
| get_month(month: int) -> str | Converts an integer month number to its corresponding month name. Args: month (int): An integer between 1 and 12 representing the month number. Returns: str: The full name of the month corresponding to the input month number. If the input is not within the range of 1-12, returns "Invalid month number". If the input is not an integer, returns "Invalid input, integer is required". |
| get_month_number(month_name: str) -> int | Converts a month name to its corresponding month number. Args: month_name (str): A string containing the full name of a month. Returns: int: The month number corresponding to the input month name. If the input is not a valid month name, returns -1. If the input is not a string, returns "Invalid input, string is required". |
Example:
```python
from dsg_lib.common_functions.calendar_functions import get_month, get_month_number

print(get_month(1))
# Outputs: 'January'
print(get_month_number('January'))
# Outputs: 1
```
This module is part of the dsg_lib package and is used for handling and converting between month numbers and names.
Author: Mike Ryan Date: 2024/05/16 License: MIT
get_month(month)¶
Converts an integer month number to its corresponding month name.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| month | | An integer or integer-like float between 1 and 12 | required |
Returns:
| Name | Type | Description |
|---|---|---|
| str | | The full name of the month corresponding to the input month number. If the input is not within the range of 1-12, returns "Invalid month number". If the input is not an integer or integer-like float, returns "Invalid input, integer is required". |
Source code in dsg_lib/common_functions/calendar_functions.py, lines 58–106.
get_month_number(month_name)¶
Converts a month name to its corresponding month number.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| month_name | | A string containing the full name of a month. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| int | | The month number corresponding to the input month name. If the input is not a valid month name or not a string, returns -1. |
Source code in dsg_lib/common_functions/calendar_functions.py, lines 109–150.
FastAPI Functions
Reference¶
dsg_lib.fastapi_functions.http_codes¶
http_codes.py
This module provides a dictionary of HTTP status codes and their descriptions.
The dictionary `ALL_HTTP_CODES` contains the HTTP status codes as keys. Each key maps to another dictionary that contains a description of the status code, an extended description, and a link to its documentation on the Mozilla Developer Network (MDN).
Example:
```python
from dsg_lib.fastapi_functions import http_codes

# Get the description, extended description, and link for HTTP status code 200
status_200 = http_codes.ALL_HTTP_CODES[200]
print(status_200)
# {'description': 'OK', 'extended_description': 'The request has succeeded', 'link': 'https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200'}
```
Attributes:
| Name | Type | Description |
|---|---|---|
| ALL_HTTP_CODES | dict | A dictionary of HTTP status codes. Each key is an HTTP status code; each value is a dictionary with the code's description, extended description, and documentation link. |
Author: Mike Ryan Date: 2024/05/16 License: MIT
DELETE_CODES = generate_code_dict(common_codes + [202, 204, 205, 409]) module-attribute¶
DELETE_CODES is a dictionary of HTTP status codes for DELETE requests. It includes all the common codes, plus some additional codes that are specific to DELETE requests.
Example:
```python
from dsg_lib.fastapi_functions import http_codes

# Print the dictionary of HTTP status codes for DELETE requests
print(http_codes.DELETE_CODES)
```
GET_CODES = generate_code_dict(common_codes + [206, 304, 307, 410, 502]) module-attribute¶
GET_CODES is a dictionary of HTTP status codes for GET requests. It includes all the common codes, plus some additional codes that are specific to GET requests.
Example:
```python
from dsg_lib.fastapi_functions import http_codes

# Print the dictionary of HTTP status codes for GET requests
print(http_codes.GET_CODES)
```
PATCH_CODES = generate_code_dict(common_codes + [202, 204, 206, 409, 412, 413]) module-attribute¶
PATCH_CODES is a dictionary of HTTP status codes for PATCH requests. It includes all the common codes, plus some additional codes that are specific to PATCH requests.
Example:
```python
from dsg_lib.fastapi_functions import http_codes

# Print the dictionary of HTTP status codes for PATCH requests
print(http_codes.PATCH_CODES)
```
POST_CODES = generate_code_dict(common_codes + [201, 202, 205, 307, 409, 413, 415]) module-attribute¶
POST_CODES is a dictionary of HTTP status codes for POST requests. It includes all the common codes, plus some additional codes that are specific to POST requests.
Example:
```python
from dsg_lib.fastapi_functions import http_codes

# Print the dictionary of HTTP status codes for POST requests
print(http_codes.POST_CODES)
```
PUT_CODES = generate_code_dict(common_codes + [202, 204, 206, 409, 412, 413]) module-attribute¶
PUT_CODES is a dictionary of HTTP status codes for PUT requests. It includes all the common codes, plus some additional codes that are specific to PUT requests.
Example:
```python
from dsg_lib.fastapi_functions import http_codes

# Print the dictionary of HTTP status codes for PUT requests
print(http_codes.PUT_CODES)
```
generate_code_dict(codes, description_only=False)¶
Generate a dictionary of specific HTTP error codes from the http_codes dictionary.
This function takes a list of HTTP status codes and an optional boolean flag. If the flag is True, the function returns a dictionary where each key is an HTTP status code from the input list and each value is the corresponding description from the ALL_HTTP_CODES dictionary. If the flag is False, the function returns a dictionary where each key is an HTTP status code from the input list and each value is the corresponding dictionary from the ALL_HTTP_CODES dictionary.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| codes | list | A list of HTTP status codes. | required |
| description_only | bool | If True, only the description of the codes will be returned. | False |
Returns:
| Name | Type | Description |
|---|---|---|
| dict | | A dictionary where each key is an HTTP error code from the input list and each value depends on the description_only parameter. If description_only is True, the value is the description string. If description_only is False, the value is a dictionary with keys 'description', 'extended_description', and 'link'. |
Example:
```python
from dsg_lib.fastapi_functions import http_codes

# Generate a dictionary for HTTP status codes 200 and 404
status_dict = http_codes.generate_code_dict([200, 404])
print(status_dict)
# {200: {'description': 'OK', 'extended_description': 'The request has succeeded', 'link': 'https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200'},
#  404: {'description': 'Not Found', 'extended_description': 'The requested resource could not be found', 'link': 'https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404'}}

# Generate a dictionary for HTTP status codes 200 and 404 with only descriptions
status_dict = http_codes.generate_code_dict([200, 404], description_only=True)
print(status_dict)  # {200: 'OK', 404: 'Not Found'}
```
Source code in dsg_lib/fastapi_functions/http_codes.py, lines 49–112.
dsg_lib.fastapi_functions._all_codes¶
This module contains a dictionary mapping HTTP status codes to their descriptions, extended descriptions, and links to their documentation.
Each key in this dictionary is an HTTP status code, and each value is another dictionary with keys 'description', 'extended_description', and 'link'.
The 'description' key maps to a brief string that describes the HTTP status code. The 'extended_description' key maps to a more detailed explanation of the status code. The 'link' key maps to a string that is a link to the documentation for the HTTP status code.
Example
```python
from dsg_lib.fastapi_functions.http_codes import ALL_HTTP_CODES

# Get the dictionary for HTTP status code 200
status_200 = ALL_HTTP_CODES[200]
print(status_200)
# Output: {'description': 'OK', 'extended_description': 'The request has succeeded.', 'link': 'https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200'}

# Get the description for HTTP status code 404
description_404 = ALL_HTTP_CODES[404]['description']
print(description_404)  # Output: 'Not Found'

# Get the extended description for HTTP status code 200
extended_description_200 = ALL_HTTP_CODES[200]['extended_description']
print(extended_description_200)  # Output: 'The request has succeeded.'

# Get the link to the documentation for HTTP status code 500
link_500 = ALL_HTTP_CODES[500]['link']
print(link_500)  # Output: 'https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500'
```
Author: Mike Ryan Date: 2024/05/16 License: MIT
Reference¶
dsg_lib.fastapi_functions.system_health_endpoints¶
This module provides a configurable health endpoint for a FastAPI application. It includes the following routes:
- `/api/health/status`: Returns the status of the application. If the application is running, it will return `{"status": "UP"}`. This endpoint can be enabled or disabled using the configuration.
- `/api/health/uptime`: Returns the uptime of the application in a dictionary with the keys "Days", "Hours", "Minutes", and "Seconds". The uptime is calculated from the time the application was started. This endpoint can be enabled or disabled using the configuration.
- `/api/health/heapdump`: Returns a heap dump of the application. The heap dump is a list of dictionaries, each representing a line of code. Each dictionary includes the filename, line number, size of memory consumed, and the number of times the line is referenced. This endpoint can be enabled or disabled using the configuration.
The module uses the `FastAPI`, `time`, `tracemalloc`, `loguru`, `packaging`, and `dsg_lib.fastapi.http_codes` modules.
Functions:
| Name | Description |
|---|---|
| create_health_router(config: dict) -> FastAPI.APIRouter | Creates a FastAPI router with health endpoints based on the provided configuration. |
Example

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

from dsg_lib.fastapi_functions import system_health_endpoints

app = FastAPI()

# User configuration
config = {
    "enable_status_endpoint": True,
    "enable_uptime_endpoint": True,
    "enable_heapdump_endpoint": True,
}

# Health router
health_router = system_health_endpoints.create_health_router(config)
app.include_router(health_router, prefix="/api/health", tags=["system-health"])

client = TestClient(app)

# Get the status of the application
response = client.get("/api/health/status")
print(response.json())  # {"status": "UP"}

# Get the uptime of the application
response = client.get("/api/health/uptime")
print(response.json())
# {"uptime": {"Days": 0, "Hours": 0, "Minutes": 1, "Seconds": 42.17}}

# Get the heap dump of the application
response = client.get("/api/health/heapdump")
print(response.json())
# {"memory_use": {"current": "123456", "peak": "789012"}, "heap_dump": [{"filename": "main.py", "lineno": 10, "size": 1234, "count": 1}, ...]}
```
Author: Mike Ryan Date: 2024/05/16 License: MIT
create_health_router(config)¶
Create a health router with the following endpoints:
- `/status`: Returns the status of the application. This endpoint can be enabled or disabled using the `enable_status_endpoint` key in the configuration.
- `/uptime`: Returns the uptime of the application. This endpoint can be enabled or disabled using the `enable_uptime_endpoint` key in the configuration.
- `/heapdump`: Returns a heap dump of the application. This endpoint can be enabled or disabled using the `enable_heapdump_endpoint` key in the configuration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | dict | A dictionary with the configuration for the endpoints. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| APIRouter | | A FastAPI router with the configured endpoints. |
Example

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

from dsg_lib.fastapi_functions import system_health_endpoints

app = FastAPI()

# User configuration
config = {
    "enable_status_endpoint": True,
    "enable_uptime_endpoint": True,
    "enable_heapdump_endpoint": True,
}

# Health router
health_router = system_health_endpoints.create_health_router(config)
app.include_router(health_router, prefix="/api/health", tags=["system-health"])

client = TestClient(app)

# Get the status of the application
response = client.get("/api/health/status")
print(response.json())  # {"status": "UP"}

# Get the uptime of the application
response = client.get("/api/health/uptime")
print(response.json())
# {"uptime": {"Days": 0, "Hours": 0, "Minutes": 1, "Seconds": 42.17}}

# Get the heap dump of the application
response = client.get("/api/health/heapdump")
print(response.json())
# {"memory_use": {"current": "123456", "peak": "789012"}, "heap_dump": [{"filename": "main.py", "lineno": 10, "size": 1234, "count": 1}, ...]}
```
Source code in dsg_lib/fastapi_functions/system_health_endpoints.py, lines 82–399.
Reference¶
dsg_lib.fastapi_functions.default_endpoints¶
create_default_router(config)¶
Creates a router with default endpoints, including a configurable robots.txt.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | | A list of dictionaries specifying which bots are allowed or disallowed. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| APIRouter | | A FastAPI router with the default endpoints. |
Source code in dsg_lib/fastapi_functions/default_endpoints.py, lines 8–41.
Database Functions
Reference¶
dsg_lib.async_database_functions.base_schema¶
This module defines the base schema for database models in the application.
The module uses SQLAlchemy as the ORM and provides a `SchemaBase` class that all other models should inherit from. The `SchemaBase` class includes common columns that are needed for most models like `pkid`, `date_created`, and `date_updated`.

- `pkid`: A unique identifier for each record. It's a string representation of a UUID.
- `date_created`: The date and time when a particular row was inserted into the table. It defaults to the current UTC time when the instance is created.
- `date_updated`: The date and time when a particular row was last updated. It defaults to the current UTC time whenever the instance is updated.
To create a new database model, import this module and extend the `SchemaBase` class.
Example:
```python
from dsg_lib.async_database_functions import base_schema

class MyModel(base_schema.SchemaBaseSQLite):
    # Define your model-specific columns here
    my_column = base_schema.Column(base_schema.String(50))
```
Author: Mike Ryan Date: 2024/05/16 License: MIT
SchemaBaseCockroachDB¶

This class provides a base schema that includes common columns for most models when using a CockroachDB database. CockroachDB uses the same syntax as PostgreSQL. All other models should inherit from this class.

Attributes:

| Name | Type | Description |
|---|---|---|
| pkid | | A unique identifier for each record. It's a string representation of a UUID. |
| date_created | | The date and time when a particular row was inserted into the table. |
| date_updated | | The date and time when a particular row was last updated. |

Example:

```python
from dsg_lib.async_database_functions import base_schema
from sqlalchemy.orm import declarative_base

BASE = declarative_base()

class MyModel(base_schema.SchemaBaseCockroachDB, BASE):
    # Define your model-specific columns here
    my_column = base_schema.Column(base_schema.String(50))
```

Source code in dsg_lib/async_database_functions/base_schema.py, lines 486–542.
SchemaBaseFirebird¶

This class provides a base schema that includes common columns for most models when using a Firebird database. All other models should inherit from this class.

Attributes:

| Name | Type | Description |
|---|---|---|
| pkid | | A unique identifier for each record. It's a string representation of a UUID. |
| date_created | | The date and time when a particular row was inserted into the table. |
| date_updated | | The date and time when a particular row was last updated. |

Example:

```python
from dsg_lib.async_database_functions import base_schema
from sqlalchemy.orm import declarative_base

BASE = declarative_base()

class MyModel(base_schema.SchemaBaseFirebird, BASE):
    # Define your model-specific columns here
    my_column = base_schema.Column(base_schema.String(50))
```

Source code in dsg_lib/async_database_functions/base_schema.py, lines 368–424.
SchemaBaseMSSQL¶

This class provides a base schema that includes common columns for most models when using a Microsoft SQL Server database. All other models should inherit from this class.

Attributes:

| Name | Type | Description |
|---|---|---|
| pkid | | A unique identifier for each record. It's a string representation of a UUID. |
| date_created | | The date and time when a particular row was inserted into the table. |
| date_updated | | The date and time when a particular row was last updated. |

Example:

```python
from dsg_lib.async_database_functions import base_schema
from sqlalchemy.orm import declarative_base

BASE = declarative_base()

class MyModel(base_schema.SchemaBaseMSSQL, BASE):
    # Define your model-specific columns here
    my_column = base_schema.Column(base_schema.String(50))
```

Source code in dsg_lib/async_database_functions/base_schema.py, lines 309–365.
SchemaBaseMySQL¶

This class provides a base schema that includes common columns for most models when using a MySQL database. All other models should inherit from this class.

Attributes:

| Name | Type | Description |
|---|---|---|
| pkid | | A unique identifier for each record. It's a string representation of a UUID. |
| date_created | | The date and time when a particular row was inserted into the table. |
| date_updated | | The date and time when a particular row was last updated. |

Example:

```python
from dsg_lib.async_database_functions import base_schema
from sqlalchemy.orm import declarative_base

BASE = declarative_base()

class MyModel(base_schema.SchemaBaseMySQL, BASE):
    # Define your model-specific columns here
    my_column = base_schema.Column(base_schema.String(50))
```

Source code in dsg_lib/async_database_functions/base_schema.py, lines 191–247.
SchemaBaseOracle¶

This class provides a base schema that includes common columns for most models when using an Oracle database. All other models should inherit from this class.

Attributes:

| Name | Type | Description |
|---|---|---|
| pkid | | A unique identifier for each record. It's a string representation of a UUID. |
| date_created | | The date and time when a particular row was inserted into the table. |
| date_updated | | The date and time when a particular row was last updated. |

Example:

```python
from dsg_lib.async_database_functions import base_schema
from sqlalchemy.orm import declarative_base

BASE = declarative_base()

class MyModel(base_schema.SchemaBaseOracle, BASE):
    # Define your model-specific columns here
    my_column = base_schema.Column(base_schema.String(50))
```

Source code in dsg_lib/async_database_functions/base_schema.py, lines 250–306.
SchemaBasePostgres¶

This class provides a base schema that includes common columns for most models when using a PostgreSQL database. All other models should inherit from this class.

Attributes:

| Name | Type | Description |
|---|---|---|
| pkid | | A unique identifier for each record. It's a string representation of a UUID. |
| date_created | | The date and time when a particular row was inserted into the table. |
| date_updated | | The date and time when a particular row was last updated. |

Example:

```python
from dsg_lib.async_database_functions import base_schema
from sqlalchemy.orm import declarative_base

BASE = declarative_base()

class MyModel(base_schema.SchemaBasePostgres, BASE):
    # Define your model-specific columns here
    my_column = base_schema.Column(base_schema.String(50))
```

Source code in dsg_lib/async_database_functions/base_schema.py, lines 132–188.
SchemaBaseSQLite¶

This class provides a base schema that includes common columns for most models. All other models should inherit from this class.

Attributes:

| Name | Type | Description |
|---|---|---|
| pkid | | A unique identifier for each record. It's a string representation of a UUID. |
| date_created | | The date and time when a particular row was inserted into the table. |
| date_updated | | The date and time when a particular row was last updated. |

Example:

```python
from dsg_lib.async_database_functions import base_schema
from sqlalchemy.orm import declarative_base

BASE = declarative_base()

class MyModel(base_schema.SchemaBaseSQLite, BASE):
    # Define your model-specific columns here
    my_column = base_schema.Column(base_schema.String(50))
```

Source code in dsg_lib/async_database_functions/base_schema.py, lines 74–129.
SchemaBaseSybase¶

This class provides a base schema that includes common columns for most models when using a Sybase database. All other models should inherit from this class.

Attributes:

| Name | Type | Description |
|---|---|---|
| pkid | | A unique identifier for each record. It's a string representation of a UUID. |
| date_created | | The date and time when a particular row was inserted into the table. |
| date_updated | | The date and time when a particular row was last updated. |

Example:

```python
from dsg_lib.async_database_functions import base_schema
from sqlalchemy.orm import declarative_base

BASE = declarative_base()

class MyModel(base_schema.SchemaBaseSybase, BASE):
    # Define your model-specific columns here
    my_column = base_schema.Column(base_schema.String(50))
```

Source code in dsg_lib/async_database_functions/base_schema.py, lines 427–483.
Reference¶
dsg_lib.async_database_functions.database_config¶
This module provides classes and functions for managing asynchronous database operations using SQLAlchemy and asyncio.
The main classes are DBConfig, which manages the database configuration and creates a SQLAlchemy engine and a MetaData instance, and AsyncDatabase, which uses an instance of DBConfig to perform asynchronous database operations.
The module also provides a function, import_sqlalchemy, which tries to import SQLAlchemy and its components, and raises an ImportError if SQLAlchemy is not installed or if the installed version is not compatible.
The module uses the logger from `dsg_lib` for logging, and the `time` module for working with times. It also uses the `contextlib` module for creating context managers, and the `typing` module for type hinting.

The `BASE` variable is a base class for declarative database models. It is created using the `declarative_base` function from `sqlalchemy.orm`.

This module is part of the `dsg_lib` package, which provides utilities for working with databases in Python.
Example:
```python
from dsg_lib.async_database_functions import database_config

# Define your database configuration
config = {
    "database_uri": "postgresql+asyncpg://user:password@localhost/dbname",
    "echo": True,
    "future": True,
    "pool_pre_ping": True,
    "pool_size": 5,
    "max_overflow": 10,
    "pool_recycle": 3600,
    "pool_timeout": 30,
}

# Create a DBConfig instance
db_config = database_config.DBConfig(config)

# Use the DBConfig instance to get a database session
async with db_config.get_db_session() as session:
    # Perform your database operations here
    pass
```
Author: Mike Ryan Date: 2024/05/16 License: MIT
DBConfig¶
A class used to manage the database configuration and create a SQLAlchemy engine.
Attributes:
| Name | Type | Description |
|---|---|---|
| config | dict | A dictionary containing the database configuration. |
| engine | Engine | The SQLAlchemy engine created with the URI from the config. |
| metadata | MetaData | The SQLAlchemy MetaData instance. |
Create Engine Support Functions by Database Type. Confirmed by testing: SQLite, PostgreSQL. To be tested (consider experimental): MySQL, Oracle, MSSQL.

| Option | SQLite | PostgreSQL | MySQL | Oracle | MSSQL |
|---|---|---|---|---|---|
| echo | Yes | Yes | Yes | Yes | Yes |
| future | Yes | Yes | Yes | Yes | Yes |
| pool_pre_ping | Yes | Yes | Yes | Yes | Yes |
| pool_size | No | Yes | Yes | Yes | Yes |
| max_overflow | No | Yes | Yes | Yes | Yes |
| pool_recycle | Yes | Yes | Yes | Yes | Yes |
| pool_timeout | No | Yes | Yes | Yes | Yes |
Example:
```python
from dsg_lib.async_database_functions import database_config

# Define your database configuration
config = {
    "database_uri": "postgresql+asyncpg://user:password@localhost/dbname",
    "echo": True,
    "future": True,
    "pool_pre_ping": True,
    "pool_size": 5,
    "max_overflow": 10,
    "pool_recycle": 3600,
    "pool_timeout": 30,
}

# Create a DBConfig instance
db_config = database_config.DBConfig(config)

# Use the DBConfig instance to get a database session
async with db_config.get_db_session() as session:
    # Perform your database operations here
    pass
```
Source code in dsg_lib/async_database_functions/database_config.py, lines 80–255.
__init__(config)¶
Initializes the DBConfig instance with the given database configuration.
The configuration should be a dictionary with the following keys:

- "database_uri": The URI for the database.
- "echo": If True, the engine will log all statements as well as a `repr()` of their parameter lists to the engine's logger, which defaults to sys.stdout.
- "future": If True, use the future version of SQLAlchemy, which supports asyncio.
- "pool_pre_ping": If True, the pool will test the connection for liveness upon each checkout.
- "pool_size": The size of the connection pool to be maintained.
- "max_overflow": The number of connections that can be opened above the `pool_size` setting, when all other connections are in use.
- "pool_recycle": The number of seconds after which a connection is automatically recycled. This is required for MySQL, which removes connections after 8 hours idle by default.
- "pool_timeout": The number of seconds to wait before giving up on getting a connection from the pool.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | dict | A dictionary containing the database configuration. | required |
Raises:
| Type | Description |
|---|---|
| | If there are unsupported parameters for the database. |
Example:

from dsg_lib.async_database_functions import database_config

# Define your database configuration
config = {
    "database_uri": "postgresql+asyncpg://user:password@localhost/dbname",
    "echo": True,
    "future": True,
    "pool_pre_ping": True,
    "pool_size": 5,
    "max_overflow": 10,
    "pool_recycle": 3600,
    "pool_timeout": 30,
}

# Create a DBConfig instance
db_config = database_config.DBConfig(config)
Source code in dsg_lib/async_database_functions/database_config.py
get_db_session()
async
¶
This method returns a context manager that provides a new database session.
The session is created using the SQLAlchemy engine from the DBConfig instance, and it does not expire on commit. The session is of type AsyncSession.
This method should be used with the `async with` statement.
Yields:

Name | Type | Description |
---|---|---|
`AsyncSession` | | A new SQLAlchemy asynchronous session. |
Raises:

Type | Description |
---|---|
`Exception` | If a database error occurs. |
Example:

from dsg_lib.async_database_functions import database_config

# Define your database configuration
config = {
    "database_uri": "postgresql+asyncpg://user:password@localhost/dbname",
    "echo": True,
    "future": True,
    "pool_pre_ping": True,
    "pool_size": 5,
    "max_overflow": 10,
    "pool_recycle": 3600,
    "pool_timeout": 30,
}

# Create a DBConfig instance
db_config = database_config.DBConfig(config)

# Use the DBConfig instance to get a database session
async with db_config.get_db_session() as session:
    # Perform your database operations here
    pass
Source code in dsg_lib/async_database_functions/database_config.py
Reference¶
dsg_lib.async_database_functions.async_database
¶
async_database.py
This module provides classes for managing asynchronous database operations using SQLAlchemy and asyncio.
Classes:

Name | Description |
---|---|
`DBConfig` | Initializes and manages the database configuration including the creation of the SQLAlchemy engine and MetaData instance. |
`AsyncDatabase` | Leverages a DBConfig instance to perform asynchronous database operations such as obtaining sessions, creating tables, and disconnecting from the database. |
Logging is performed using the logger from dsg_lib.common_functions.
Example
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
# Define database configuration
config = {
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
"pool_recycle": 3600,
}
# Create the configuration instance
db_config = database_config.DBConfig(config)
# Instantiate AsyncDatabase with the given configuration
async_db = async_database.AsyncDatabase(db_config)
# Optionally, create a DatabaseOperations instance
db_ops = database_operations.DatabaseOperations(async_db)
Date Created
2024/05/16
Date Updated
2025/02/15 - docstring and comments updated
License
MIT
AsyncDatabase
¶
Manages asynchronous database operations.
This class provides methods to acquire database sessions, create tables asynchronously, and disconnect the database engine safely.
Attributes¶
db_config : DBConfig
    An instance of DBConfig containing the database configuration such as the engine.
Base : Base
    The declarative base model used by SQLAlchemy to define database models.
Methods¶
get_db_session():
    Returns a context manager that yields a new asynchronous database session.
create_tables():
    Asynchronously creates all tables as defined in the metadata.
disconnect():
    Asynchronously disconnects the database engine.
Source code in dsg_lib/async_database_functions/async_database.py
__init__(db_config)
¶
Initialize AsyncDatabase with a database configuration.
Parameters¶
db_config : DBConfig
    An instance of DBConfig containing the necessary database configurations.
Source code in dsg_lib/async_database_functions/async_database.py
create_tables()
async
¶
Asynchronously create all tables defined in the metadata.
This method binds the engine to the Base metadata and runs the table creation in a synchronous manner within an asynchronous transaction.
Raises¶
Exception Propagates any exceptions encountered during table creation.
Source code in dsg_lib/async_database_functions/async_database.py
disconnect()
async
¶
Asynchronously disconnect the database engine.
Closes all connections and disposes of the engine resources.
Raises¶
Exception Propagates any exceptions encountered during disconnection.
Source code in dsg_lib/async_database_functions/async_database.py
get_db_session()
¶
Obtain a new asynchronous database session.
Returns¶
contextlib._GeneratorContextManager A context manager that yields a new database session.
Source code in dsg_lib/async_database_functions/async_database.py
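The pattern behind `get_db_session` can be sketched independently of SQLAlchemy: an async context manager that always releases the session, even when the caller raises. The `DummySession` class below is a stand-in used purely for illustration, not part of the library.

```python
import asyncio
from contextlib import asynccontextmanager

class DummySession:
    """Stand-in for an AsyncSession; records whether it was closed."""
    def __init__(self):
        self.closed = False

    async def close(self):
        self.closed = True

@asynccontextmanager
async def get_db_session():
    # Mirror the library's pattern: create a session, yield it to the
    # caller, and always close it afterwards, even on error.
    session = DummySession()
    try:
        yield session
    finally:
        await session.close()

async def main():
    async with get_db_session() as session:
        assert not session.closed  # still open inside the block
    return session

session = asyncio.run(main())
# session.closed is now True: the session was closed on exit
```

The `try`/`finally` around the `yield` is what guarantees cleanup regardless of how the `async with` block exits.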
Reference¶
Configuration Matrix¶
Create-engine support options by database type. Confirmed by testing: SQLite, PostgreSQL. To be tested: MySQL, Oracle, MSSQL; support for these should be considered experimental.
Option | SQLite | PostgreSQL | MySQL | Oracle | MSSQL |
---|---|---|---|---|---|
echo | Yes | Yes | Yes | Yes | Yes |
future | Yes | Yes | Yes | Yes | Yes |
pool_pre_ping | Yes | Yes | Yes | Yes | Yes |
pool_size | No | Yes | Yes | Yes | Yes |
max_overflow | No | Yes | Yes | Yes | Yes |
pool_recycle | Yes | Yes | Yes | Yes | Yes |
pool_timeout | No | Yes | Yes | Yes | Yes |
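As a sketch of how the matrix can be applied in practice, the hypothetical helper below (not part of dsg_lib) drops the pool-sizing options that SQLite does not support before building a config dict:

```python
# Pool options marked "No" for SQLite in the matrix above.
SQLITE_UNSUPPORTED = {"pool_size", "max_overflow", "pool_timeout"}

def build_config(database_uri: str, **options) -> dict:
    """Build a DBConfig-style dict, stripping options SQLite does not support."""
    if database_uri.startswith("sqlite"):
        options = {k: v for k, v in options.items() if k not in SQLITE_UNSUPPORTED}
    return {"database_uri": database_uri, **options}

sqlite_cfg = build_config(
    "sqlite+aiosqlite:///:memory:?cache=shared",
    echo=False, pool_size=5, pool_recycle=3600,
)
# sqlite_cfg keeps echo and pool_recycle but drops pool_size
```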
dsg_lib.async_database_functions.database_operations
¶
This module provides the `DatabaseOperations` class for performing CRUD operations on a database using SQLAlchemy's asynchronous session.
The `DatabaseOperations` class includes the following methods:
- `execute_one`: Executes a single non-read SQL query asynchronously.
- `execute_many`: Executes multiple non-read SQL queries asynchronously within a single transaction.
- `read_one_record`: Retrieves a single record from the database based on the provided query.
- `read_query`: Executes a fetch query on the database and returns a list of records that match the query.
- `read_multi_query`: Executes multiple fetch queries on the database and returns a dictionary of results for each query.
- `count_query`: Counts the number of records that match a given query.
- `get_columns_details`: Gets the details of the columns in a table.
- `get_primary_keys`: Gets the primary keys of a table.
- `get_table_names`: Gets the names of all tables in the database.
Deprecated Methods:
- `create_one`: [Deprecated] Use `execute_one` with an INSERT query instead.
- `create_many`: [Deprecated] Use `execute_many` with INSERT queries instead.
- `update_one`: [Deprecated] Use `execute_one` with an UPDATE query instead.
- `update_many`: [Deprecated] Use `execute_many` with UPDATE queries instead.
- `delete_one`: [Deprecated] Use `execute_one` with a DELETE query instead.
- `delete_many`: [Deprecated] Use `execute_many` with DELETE queries instead.
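As a sketch of the migration path, the deprecated create calls map onto `execute_one`/`execute_many` with INSERT statements. The `User` model here is hypothetical, and the commented `db_ops` calls assume an instance built as shown elsewhere in this module's documentation:

```python
from sqlalchemy import Column, Integer, String, insert
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):  # hypothetical model for illustration
    __tablename__ = "users"
    pkid = Column(Integer, primary_key=True)
    name = Column(String)

# Deprecated: record = await db_ops.create_one(User(name="John Doe"))
# Replacement: build the INSERT yourself and hand it to execute_one.
query = insert(User).values(name="John Doe")
# result = await db_ops.execute_one(query)

# Deprecated: await db_ops.create_many([User(name="A"), User(name="B")])
# Replacement: execute_many takes (statement, values) tuples run in one transaction.
queries = [(insert(User), {"name": "A"}), (insert(User), {"name": "B"})]
# results = await db_ops.execute_many(queries)
```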
Each method is designed to handle errors correctly and provide a simple interface for performing database operations.
This module also imports the necessary SQLAlchemy and loguru modules, and the `AsyncDatabase` class from the local `async_database` module.
Author: Mike Ryan
Date: 2024/11/29
License: MIT
DatabaseOperations
¶
This class provides methods for performing CRUD operations on a database using SQLAlchemy's asynchronous session.
The methods include:

- `execute_one`: Executes a single non-read SQL query asynchronously.
- `execute_many`: Executes multiple non-read SQL queries asynchronously within a single transaction.
- `read_one_record`: Retrieves a single record from the database based on the provided query.
- `read_query`: Executes a fetch query on the database and returns a list of records that match the query.
- `read_multi_query`: Executes multiple fetch queries on the database and returns a dictionary of results for each query.
- `count_query`: Counts the number of records that match a given query.
- `get_columns_details`: Gets the details of the columns in a table.
- `get_primary_keys`: Gets the primary keys of a table.
- `get_table_names`: Gets the names of all tables in the database.
Deprecated Methods:

- `create_one`: [Deprecated] Use `execute_one` with an INSERT query instead.
- `create_many`: [Deprecated] Use `execute_many` with INSERT queries instead.
- `update_one`: [Deprecated] Use `execute_one` with an UPDATE query instead.
- `delete_one`: [Deprecated] Use `execute_one` with a DELETE query instead.
- `delete_many`: [Deprecated] Use `execute_many` with DELETE queries instead.
Examples:
from sqlalchemy import insert, select
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
# Create a DBConfig instance
config = {
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
"pool_recycle": 3600,
}
# create database configuration
db_config = database_config.DBConfig(config)
# Create an AsyncDatabase instance
async_db = async_database.AsyncDatabase(db_config)
# Create a DatabaseOperations instance
db_ops = database_operations.DatabaseOperations(async_db)
# create one record
query = insert(User).values(name='John Doe')
result = await db_ops.execute_one(query)
# read records matching the query
query = select(User).where(User.name == 'John Doe')
records = await db_ops.read_query(query)
Source code in dsg_lib/async_database_functions/database_operations.py
__init__(async_db)
¶
Initializes a new instance of the DatabaseOperations class.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`async_db` | `AsyncDatabase` | An instance of the `AsyncDatabase` class. | required |
Example:
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
config = {
# "database_uri": "postgresql+asyncpg://postgres:postgres@postgresdb/postgres",
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
# "pool_pre_ping": True,
# "pool_size": 10,
# "max_overflow": 10,
"pool_recycle": 3600,
# "pool_timeout": 30,
}
db_config = database_config.DBConfig(config)
async_db = async_database.AsyncDatabase(db_config)
db_ops = database_operations.DatabaseOperations(async_db)
Source code in dsg_lib/async_database_functions/database_operations.py
count_query(query)
async
¶
Executes a count query on the database and returns the number of records that match the query.
This asynchronous method accepts a SQLAlchemy `Select` query object and returns the count of records that match the query. This is particularly useful for getting the total number of records that satisfy certain conditions without actually fetching the records themselves.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`query` | `Select` | A SQLAlchemy `Select` query object. | required |
Returns:

Name | Type | Description |
---|---|---|
`int` | | The number of records that match the query. |
Raises:

Type | Description |
---|---|
`Exception` | If any error occurs during the execution of the query. |
Example
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
# Create a DBConfig instance
config = {
# "database_uri": "postgresql+asyncpg://postgres:postgres@postgresdb/postgres",
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
# "pool_pre_ping": True,
# "pool_size": 10,
# "max_overflow": 10,
"pool_recycle": 3600,
# "pool_timeout": 30,
}
# create database configuration
db_config = database_config.DBConfig(config)
# Create an AsyncDatabase instance
async_db = async_database.AsyncDatabase(db_config)
# Create a DatabaseOperations instance
db_ops = database_operations.DatabaseOperations(async_db)
# count query
count = await db_ops.count_query(select(User).where(User.age > 30))
Source code in dsg_lib/async_database_functions/database_operations.py
create_many(records)
async
¶
This method is deprecated. Use `execute_many` with INSERT queries instead.
Adds multiple records to the database.
This asynchronous method accepts a list of record objects and adds them to the database. If the operation is successful, it returns the added records. This method is useful for bulk inserting multiple rows into a database table efficiently.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`records` | `list[Base]` | A list of instances of the SQLAlchemy declarative base class. | required |
Returns:

Type | Description |
---|---|
`list[Base]` | A list of instances of the records that were added to the database. |
Raises:

Type | Description |
---|---|
`Exception` | If any error occurs during the database operation. |
Example
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
# Create a DBConfig instance
config = {
# "database_uri": "postgresql+asyncpg://postgres:postgres@postgresdb/postgres",
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
# "pool_pre_ping": True,
# "pool_size": 10,
# "max_overflow": 10,
"pool_recycle": 3600,
# "pool_timeout": 30,
}
# create database configuration
db_config = database_config.DBConfig(config)
# Create an AsyncDatabase instance
async_db = async_database.AsyncDatabase(db_config)
# Create a DatabaseOperations instance
db_ops = database_operations.DatabaseOperations(async_db)
# create many records
records = await db_ops.create_many([User(name='John Doe'), User(name='Jane Doe')])
Source code in dsg_lib/async_database_functions/database_operations.py
create_one(record)
async
¶
This method is deprecated. Use `execute_one` with an INSERT query instead.
Adds a single record to the database.
This asynchronous method accepts a record object and adds it to the database. If the operation is successful, it returns the added record. The method is useful for inserting a new row into a database table.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`record` | `Base` | An instance of the SQLAlchemy declarative base class. | required |
Returns:

Name | Type | Description |
---|---|---|
`Base` | | The instance of the record that was added to the database. |
Raises:

Type | Description |
---|---|
`Exception` | If any error occurs during the database operation. |
Example
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
# Create a DBConfig instance
config = {
# "database_uri": "postgresql+asyncpg://postgres:postgres@postgresdb/postgres",
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
# "pool_pre_ping": True,
# "pool_size": 10,
# "max_overflow": 10,
"pool_recycle": 3600,
# "pool_timeout": 30,
}
# create database configuration
db_config = database_config.DBConfig(config)
# Create an AsyncDatabase instance
async_db = async_database.AsyncDatabase(db_config)
# Create a DatabaseOperations instance
db_ops = database_operations.DatabaseOperations(async_db)
# create one record
record = await db_ops.create_one(User(name='John Doe'))
Source code in dsg_lib/async_database_functions/database_operations.py
delete_many(table, id_column_name='pkid', id_values=None)
async
¶
This method is deprecated. Use `execute_many` with a DELETE query instead.
Deletes multiple records from the specified table in the database.
This method takes a table, an optional id column name, and a list of id values. It deletes the records in the table where the id column matches any of the id values in the list.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`table` | `Table` | The table from which to delete records. | required |
`id_column_name` | `str` | The name of the id column in the table. Defaults to "pkid". | 'pkid' |
`id_values` | `list` | A list of id values for the records to delete. Defaults to []. | None |
Returns:

Name | Type | Description |
---|---|---|
`int` | | The number of records deleted from the table. |
Example:
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
# Create a DBConfig instance
config = {
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
"pool_recycle": 3600,
}
# create database configuration
db_config = database_config.DBConfig(config)
# Create an AsyncDatabase instance
async_db = async_database.AsyncDatabase(db_config)
# Create a DatabaseOperations instance
db_ops = database_operations.DatabaseOperations(async_db)
# Delete multiple records
deleted_count = await db_ops.delete_many(User, 'id', [1, 2, 3])
print(f"Deleted {deleted_count} records.")
Source code in dsg_lib/async_database_functions/database_operations.py
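As a sketch of the replacement for the deprecated call above, the same bulk delete can be expressed as a single DELETE statement handed to `execute_one` (or split into per-id statements for `execute_many`). The `User` model with a `pkid` column is hypothetical, and the commented `db_ops` call assumes an instance built as in the example above:

```python
from sqlalchemy import Column, Integer, String, delete
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):  # hypothetical model for illustration
    __tablename__ = "users"
    pkid = Column(Integer, primary_key=True)
    name = Column(String)

# Deprecated: deleted_count = await db_ops.delete_many(User, 'pkid', [1, 2, 3])
# Replacement: one DELETE statement matching the same ids.
query = delete(User).where(User.pkid.in_([1, 2, 3]))
# result = await db_ops.execute_one(query)
```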
delete_one(table, record_id)
async
¶
This method is deprecated. Use `execute_one` with a DELETE query instead.
Deletes a single record from the database based on the provided table and record ID.
This asynchronous method accepts a SQLAlchemy `Table` object and a record ID. It attempts to delete the record with the given ID from the specified table. If the record is successfully deleted, it returns a success message. If no record with the given ID is found, it returns an error message.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`table` | `Table` | An instance of the SQLAlchemy `Table` class from which the record will be deleted. | required |
`record_id` | `str` | The ID of the record to be deleted. | required |
Returns:

Name | Type | Description |
---|---|---|
`dict` | | A dictionary containing a success message if the record was deleted successfully, or an error message if the record was not found or an exception occurred. |
Raises:

Type | Description |
---|---|
`Exception` | If any error occurs during the delete operation. |
Example
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
# Create a DBConfig instance
config = {
# "database_uri": "postgresql+asyncpg://postgres:postgres@postgresdb/postgres",
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
# "pool_pre_ping": True,
# "pool_size": 10,
# "max_overflow": 10,
"pool_recycle": 3600,
# "pool_timeout": 30,
}
# create database configuration
db_config = database_config.DBConfig(config)
# Create an AsyncDatabase instance
async_db = async_database.AsyncDatabase(db_config)
# Create a DatabaseOperations instance
db_ops = database_operations.DatabaseOperations(async_db)
# delete one record
result = await db_ops.delete_one(User, 1)
Source code in dsg_lib/async_database_functions/database_operations.py
execute_many(queries)
async
¶
Executes multiple non-read SQL queries asynchronously within a single transaction.
This method executes a list of SQL statements that modify the database, such as multiple INSERTs, UPDATEs, or DELETEs. All queries are executed within the same transaction, which is committed if all succeed, or rolled back if any fail.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`queries` | `list[tuple]` | A list of tuples, each containing a query and an optional dictionary of parameter values, i.e. of the form `(query, values)`. | required |
Returns:

Type | Description |
---|---|
`Union[str, Dict[str, str]]` | "complete" if all queries executed and committed successfully, or an error dictionary if an exception occurred. |
Example
from sqlalchemy import insert
queries = [
(insert(User), {'name': 'User1'}),
(insert(User), {'name': 'User2'}),
(insert(User), {'name': 'User3'}),
]
result = await db_ops.execute_many(queries)
Source code in dsg_lib/async_database_functions/database_operations.py
execute_one(query, values=None)
async
¶
Executes a single non-read SQL query asynchronously.
This method executes a single SQL statement that modifies the database, such as INSERT, UPDATE, or DELETE. It handles the execution within an asynchronous session and commits the transaction upon success.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`query` | | A SQLAlchemy query object representing the SQL statement to execute. | required |
`values` | `dict` | A dictionary of parameter values to bind to the query. Defaults to None. | None |
Returns:

Type | Description |
---|---|
`Union[str, Dict[str, str]]` | "complete" if the query executed and committed successfully, or an error dictionary if an exception occurred. |
Example
from sqlalchemy import insert
query = insert(User).values(name='John Doe')
result = await db_ops.execute_one(query)
Source code in dsg_lib/async_database_functions/database_operations.py
get_columns_details(table)
async
¶
Retrieves the details of the columns of a given table.
This asynchronous method accepts a table object and returns a dictionary. Each key in the dictionary is a column name from the table, and the corresponding value is another dictionary containing details about that column, such as type, if it's nullable, if it's a primary key, if it's unique, its autoincrement status, and its default value.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`table` | `Table` | An instance of the SQLAlchemy Table class. | required |
Returns:

Name | Type | Description |
---|---|---|
`dict` | | A dictionary where each key is a column name, and each value is a dictionary with the column's details. |
Raises:

Type | Description |
---|---|
`Exception` | If any error occurs during the database operation. |
Example:

from sqlalchemy import Table, MetaData, Column, Integer, String

metadata = MetaData()
my_table = Table(
    'my_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String),
)
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
# Create a DBConfig instance
config = {
# "database_uri": "postgresql+asyncpg://postgres:postgres@postgresdb/postgres",
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
# "pool_pre_ping": True,
# "pool_size": 10,
# "max_overflow": 10,
"pool_recycle": 3600,
# "pool_timeout": 30,
}
# create database configuration
db_config = database_config.DBConfig(config)
# Create an AsyncDatabase instance
async_db = async_database.AsyncDatabase(db_config)
# Create a DatabaseOperations instance
db_ops = database_operations.DatabaseOperations(async_db)
# get columns details
columns = await db_ops.get_columns_details(my_table)
Source code in dsg_lib/async_database_functions/database_operations.py
get_primary_keys(table)
async
¶
Retrieves the primary keys of a given table.
This asynchronous method accepts a table object and returns a list containing the names of its primary keys. It is useful for understanding the structure of the table and for operations that require knowledge of the primary keys.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`table` | `Table` | An instance of the SQLAlchemy Table class. | required |
Returns:

Name | Type | Description |
---|---|---|
`list` | | A list containing the names of the primary keys of the table. |
Raises:

Type | Description |
---|---|
`Exception` | If any error occurs during the database operation. |
Example:

from sqlalchemy import Table, MetaData, Column, Integer, String

metadata = MetaData()
my_table = Table(
    'my_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String, primary_key=True),
)
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
# Create a DBConfig instance
config = {
# "database_uri": "postgresql+asyncpg://postgres:postgres@postgresdb/postgres",
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
# "pool_pre_ping": True,
# "pool_size": 10,
# "max_overflow": 10,
"pool_recycle": 3600,
# "pool_timeout": 30,
}
# create database configuration
db_config = database_config.DBConfig(config)
# Create an AsyncDatabase instance
async_db = async_database.AsyncDatabase(db_config)
# Create a DatabaseOperations instance
db_ops = database_operations.DatabaseOperations(async_db)
# get primary keys
primary_keys = await db_ops.get_primary_keys(my_table)
Source code in dsg_lib/async_database_functions/database_operations.py
get_table_names() async ¶
Retrieves the names of all tables in the database.
This asynchronous method returns a list containing the names of all tables in the database. It is useful for database introspection, allowing the user to know which tables are available in the current database context.
Returns:
Name | Type | Description
---|---|---
list | `list` | A list containing the names of all tables in the database.
Raises:
Type | Description
---|---
`Exception` | If any error occurs during the database operation.
Example
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
# Create a DBConfig instance
config = {
# "database_uri": "postgresql+asyncpg://postgres:postgres@postgresdb/postgres",
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
# "pool_pre_ping": True,
# "pool_size": 10,
# "max_overflow": 10,
"pool_recycle": 3600,
# "pool_timeout": 30,
}
# create database configuration
db_config = database_config.DBConfig(config)
# Create an AsyncDatabase instance
async_db = async_database.AsyncDatabase(db_config)
# Create a DatabaseOperations instance
db_ops = database_operations.DatabaseOperations(async_db)
# get table names
table_names = await db_ops.get_table_names()
Source code in dsg_lib/async_database_functions/database_operations.py
read_multi_query(queries) async ¶
Executes multiple fetch queries asynchronously and returns a dictionary of results for each query.
This asynchronous method accepts a dictionary where each key is a query name (str) and each value is a SQLAlchemy `Select` query object. It executes each query within a single database session and collects the results. The results are returned as a dictionary mapping each query name to a list of records that match that query.
The function automatically determines the structure of each result set:
- If the query returns a single column, the result will be a list of scalar values.
- If the query returns multiple columns, the result will be a list of dictionaries mapping column names to values.
- If the result row is an ORM object, it will be returned as-is.
Parameters:
Name | Type | Description | Default
---|---|---|---
queries | `Dict[str, Select]` | A dictionary mapping query names to SQLAlchemy `Select` query objects. | required
Returns:
Type | Description
---|---
`Dict[str, List[Any]]` | A dictionary where each key is a query name and each value is a list of records (scalars, dictionaries, or ORM objects) that match the corresponding query.
Raises:
Type | Description
---|---
`Exception` | If any error occurs during the execution of any query, the function logs the error and returns an error dictionary.
Example
from sqlalchemy import select
queries = {
"adults": select(User).where(User.age >= 18),
"minors": select(User).where(User.age < 18),
}
results = await db_ops.read_multi_query(queries)
# results["adults"] and results["minors"] will contain lists of records
Source code in dsg_lib/async_database_functions/database_operations.py
read_one_record(query) async ¶
Retrieves a single record from the database based on the provided query.
This asynchronous method accepts a SQL query object and returns the first record that matches the query. If no record matches the query, it returns None. This method is useful for fetching specific data when the expected result is a single record.
Parameters:
Name | Type | Description | Default
---|---|---|---
query | `Select` | An instance of the SQLAlchemy `Select` class, representing the query to execute. | required
Returns:
Name | Type | Description
---|---|---
Result | `Result` | The first record that matches the query or None if no record matches.
Raises:
Type | Description
---|---
`Exception` | If any error occurs during the database operation.
Example
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
# Create a DBConfig instance
config = {
# "database_uri": "postgresql+asyncpg://postgres:postgres@postgresdb/postgres",
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
# "pool_pre_ping": True,
# "pool_size": 10,
# "max_overflow": 10,
"pool_recycle": 3600,
# "pool_timeout": 30,
}
# create database configuration
db_config = database_config.DBConfig(config)
# Create an AsyncDatabase instance
async_db = async_database.AsyncDatabase(db_config)
# Create a DatabaseOperations instance
db_ops = database_operations.DatabaseOperations(async_db)
# read one record
from sqlalchemy import select
record = await db_ops.read_one_record(select(User).where(User.name == 'John Doe'))
Source code in dsg_lib/async_database_functions/database_operations.py
update_one(table, record_id, new_values) async ¶
This method is deprecated. Use `execute_one` with an UPDATE query instead.
Updates a single record in the database identified by its ID.
This asynchronous method takes a SQLAlchemy `Table` object, a record ID, and a dictionary of new values to update the record. It updates the specified record in the given table with the new values. The method does not allow updating certain fields, such as 'id' or 'date_created'.
Parameters:
Name | Type | Description | Default
---|---|---|---
table | `Table` | The SQLAlchemy `Table` object representing the table in the database. | required
record_id | `str` | The ID of the record to be updated. | required
new_values | `dict` | A dictionary containing the fields to update and their new values. | required
Returns:
Name | Type | Description
---|---|---
Base | `Base` | The updated record if successful; otherwise, an error dictionary.
Raises:
Type | Description
---|---
`Exception` | If any error occurs during the update operation.
Example
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
# Create a DBConfig instance
config = {
# "database_uri": "postgresql+asyncpg://postgres:postgres@postgresdb/postgres",
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
# "pool_pre_ping": True,
# "pool_size": 10,
# "max_overflow": 10,
"pool_recycle": 3600,
# "pool_timeout": 30,
}
# create database configuration
db_config = database_config.DBConfig(config)
# Create an AsyncDatabase instance
async_db = async_database.AsyncDatabase(db_config)
# Create a DatabaseOperations instance
db_ops = database_operations.DatabaseOperations(async_db)
# update one record
record = await db_ops.update_one(User, 1, {'name': 'John Smith'})
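Since `update_one` is deprecated in favor of `execute_one` with an UPDATE query, the operation boils down to a plain SQL UPDATE. The sketch below shows that underlying statement using the stdlib `sqlite3` driver (illustrative only; with dsg_lib you would pass a SQLAlchemy `update()` construct to `execute_one`, so check the signature in your installed version):

```python
import sqlite3

# In-memory table standing in for the ORM-mapped User table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'John Doe')")

# The statement execute_one would ultimately issue for
# update_one(User, 1, {'name': 'John Smith'})
conn.execute("UPDATE users SET name = ? WHERE id = ?", ("John Smith", 1))

name = conn.execute("SELECT name FROM users WHERE id = 1").fetchone()[0]
print(name)  # John Smith
```

Note that `update_one` also refuses to touch protected fields like 'id' and 'date_created'; any replacement you write should enforce the same guard.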
Source code in dsg_lib/async_database_functions/database_operations.py
handle_exceptions(ex) ¶
Handles exceptions for database operations.
This function checks the type of the exception, logs an appropriate error message, and returns a dictionary containing the error details.
Parameters:
Name | Type | Description | Default
---|---|---|---
ex | `Exception` | The exception to handle. | required
Returns:
Name | Type | Description
---|---|---
dict | `dict` | A dictionary containing the error details. The dictionary has two keys: 'error' and 'details'.
Example:
from dsg_lib.async_database_functions import database_operations
try:
    # Some database operation that might raise an exception
    pass
except Exception as ex:
    error_details = database_operations.handle_exceptions(ex)
    print(error_details)
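The two-key dictionary described above can be mimicked with a minimal stand-in (a sketch of the pattern only, not dsg_lib's actual implementation, which distinguishes SQLAlchemy error types):

```python
def handle_exceptions_stub(ex: Exception) -> dict:
    # Map any exception to the documented two-key error dictionary
    return {"error": type(ex).__name__, "details": str(ex)}

try:
    1 / 0
except Exception as ex:
    print(handle_exceptions_stub(ex))
    # {'error': 'ZeroDivisionError', 'details': 'division by zero'}
```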
Source code in dsg_lib/async_database_functions/database_operations.py
Examples
log_example Example¶
Log Example Module¶
This module demonstrates advanced logging configurations and usage in Python. It integrates both the `logging` module and `loguru` for robust logging capabilities. The module also showcases multi-threading and multi-processing for concurrent execution, while logging messages and handling exceptions.
Features¶
- Logging Configuration: Configures logging with options for log rotation, retention, backtrace, and serialization.
- Exception Handling: Demonstrates exception handling with logging for `ZeroDivisionError`.
- Concurrent Execution:
  - Multi-threading: Executes tasks concurrently using threads.
  - Multi-processing: Executes tasks concurrently using processes.
- Large Message Logging: Logs large messages repeatedly to test logging performance.
- Progress Tracking: Uses `tqdm` to display progress bars for threads and processes.
Functions¶
div_zero(x, y)¶
Attempts to divide `x` by `y` and logs any `ZeroDivisionError` encountered.
div_zero_two(x, y)¶
Similar to `div_zero`; attempts to divide `x` by `y` and logs any `ZeroDivisionError` encountered.
log_big_string(lqty=100, size=256)¶
Logs a large string multiple times, demonstrating both standard logging and `loguru` logging.
worker(wqty=1000, lqty=100, size=256)¶
Executes the `log_big_string` function repeatedly, simulating a worker process or thread.
main(wqty, lqty, size, workers, thread_test, process_test)¶
Main entry point for the module. Configures and starts either multi-threading or multi-processing based on the provided arguments.
Usage¶
Run the module directly to test its functionality. Example:
python log_example.py
You can customize the parameters for workers, logging quantity, and message size by modifying the `main` function call in the `__main__` block.
Dependencies¶
- `logging`
- `loguru`
- `multiprocessing`
- `threading`
- `secrets`
- `tqdm`
- `dsg_lib.common_functions`
Notes¶
- Ensure the `dsg_lib` library is installed and accessible.
- Adjust the logging configuration as needed for your application.
- Use the `process_test` or `thread_test` flags to toggle between multi-processing and multi-threading.
License¶
This module is licensed under the MIT License.
# from loguru import logger
import logging
import logging as logger
import multiprocessing
import secrets
import threading
from tqdm import tqdm
from dsg_lib.common_functions import logging_config
# Configure logging as before
logging_config.config_log(
logging_directory="log",
log_name="log",
logging_level="DEBUG",
log_rotation="100 MB",
log_retention="10 days",
log_backtrace=True,
log_serializer=True,
log_diagnose=True,
# app_name='my_app',
# append_app_name=True,
intercept_standard_logging=True,
enqueue=True,
)
def div_zero(x: float, y: float) -> float | None:
    """
    Safely divide x by y and log ZeroDivisionError if encountered.
    Args:
        x (float): Numerator.
        y (float): Denominator.
    Returns:
        float | None: Quotient or None on error.
    """
    try:
        return x / y
    except ZeroDivisionError as e:
        logger.error(f"{e}")  # log via the 'logger' alias (standard logging here)
        logging.error(f"{e}")  # log via standard logging (intercepted by loguru)


def div_zero_two(x: float, y: float) -> float | None:
    """
    Mirror of div_zero demonstrating identical error handling.
    """
    try:
        return x / y
    except ZeroDivisionError as e:
        logger.error(f"{e}")
        logging.error(f"{e}")
def log_big_string(lqty: int = 100, size: int = 256) -> None:
    """
    Generate a large random string and log various messages repeatedly.
    Args:
        lqty (int): Number of log iterations.
        size (int): Length of each random string.
    """
    big_string = secrets.token_urlsafe(size)  # create URL-safe token
    for _ in range(lqty):
        logging.debug(f"Lets make this a big message {big_string}")  # standard debug
        div_zero(1, 0)  # trigger/log ZeroDivisionError
        div_zero_two(1, 0)
        # loguru messages
        logger.debug("This is a loguru debug message")
        logger.info("This is a loguru info message")
        logger.warning("This is a loguru warning message")
        logger.error("This is a loguru error message")
        logger.critical("This is a loguru critical message")
        # continued standard logging
        logging.info("This is a standard logging info message")
        logging.warning("This is a standard logging warning message")
        logging.error("This is a standard logging error message")
        logging.critical("This is a standard logging critical message")


def worker(wqty: int = 1000, lqty: int = 100, size: int = 256) -> None:
    """
    Worker routine performing log_big_string in a progress loop.
    Args:
        wqty (int): Number of outer iterations.
        lqty (int): Messages per iteration.
        size (int): Random string length.
    """
    for _ in tqdm(range(wqty), ascii=True, leave=True):
        log_big_string(lqty=lqty, size=size)
def main(
    wqty: int = 100,
    lqty: int = 10,
    size: int = 256,
    workers: int = 16,
    thread_test: bool = False,
    process_test: bool = False,
) -> None:
    """
    Configure and launch concurrent logging workers.
    Args:
        wqty (int): Iterations per worker.
        lqty (int): Logs per iteration.
        size (int): Random string size.
        workers (int): Thread/process count.
        thread_test (bool): Run threads if True.
        process_test (bool): Run processes if True.
    """
    if process_test:
        processes = []
        # Create worker processes
        for _ in tqdm(range(workers), desc="Multi-Processing Start", leave=True):
            p = multiprocessing.Process(
                target=worker,
                args=(wqty, lqty, size),
            )
            processes.append(p)
            p.start()
        for p in tqdm(processes, desc="Multi-Processing Gather", leave=False):
            p.join(timeout=60)  # Timeout after 60 seconds
            if p.is_alive():
                logger.error(f"Process {p.name} is hanging. Terminating.")
                p.terminate()
                p.join()
    if thread_test:
        threads = []
        # Create worker threads
        for _ in tqdm(range(workers), desc="Threading Start", leave=True):
            t = threading.Thread(
                target=worker,
                args=(wqty, lqty, size),
            )
            threads.append(t)
            t.start()
        for t in tqdm(threads, desc="Threading Gather", leave=False):
            t.join()


if __name__ == "__main__":
    from time import time

    start = time()
    main(wqty=5, lqty=50, size=64, workers=8, thread_test=False, process_test=True)
    print(f"Execution time: {time() - start:.2f} seconds")
cal_example Example¶
Overview¶
This module demonstrates the usage of the `calendar_functions` module from the `dsg_lib.common_functions` package. It provides examples of how to work with months, both by their numeric representation and their names.
The module includes two main functions:
1. `calendar_check_number`: Iterates through a predefined list of month numbers (`month_list`) and uses the `get_month` function from `calendar_functions` to retrieve the corresponding month name. It then prints the result for each number in the list.
   Example:
   - Input: `1`, Output: `"January"`
   - Input: `13` (invalid), Output: depends on the implementation of `get_month` (e.g., `"Invalid Month"`).
2. `calendar_check_name`: Iterates through a predefined list of month names (`month_names`) and uses the `get_month_number` function from `calendar_functions` to retrieve the corresponding numeric representation of the month. It then prints the result for each name in the list.
   Example:
   - Input: `"january"`, Output: `1`
   - Input: `"bob"` (invalid), Output: depends on the implementation of `get_month_number` (e.g., `"Invalid Month Name"`).
Features¶
- Validation of Inputs: The module demonstrates how to handle invalid inputs, such as:
  - Numbers outside the valid range of months (1-12).
  - Invalid month names that do not correspond to any recognized month.
- Testing and Debugging: This module can be used to test and validate the robustness of the `calendar_functions` module by providing a variety of valid and invalid inputs.
Usage¶
- Run the script directly to see the output of the two functions.
- Modify the `month_list` or `month_names` variables to test with different inputs.
Dependencies¶
- `dsg_lib.common_functions.calendar_functions`: This module must be available and contain the following functions:
  - `get_month`: Accepts a numeric month (e.g., `1`) and returns the corresponding month name (e.g., `"January"`).
  - `get_month_number`: Accepts a month name (e.g., `"january"`) and returns the corresponding numeric representation (e.g., `1`).
Example Output¶
For `calendar_check_number`:¶
If `month_list = [0, 1, 2, 3, 13]`, the output might be:
Invalid Month
January
February
March
Invalid Month
For `calendar_check_name`:¶
If `month_names = ["january", "february", "bob"]`, the output might be:
1
2
Invalid Month Name
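The sample output above can be reproduced with a minimal stdlib stand-in (illustrative only; the real `get_month` lives in `dsg_lib.common_functions.calendar_functions` and may handle edge cases differently):

```python
import calendar

def get_month_stub(month):
    # Return the month name for 1-12; fall back to "Invalid Month"
    # for anything else, mirroring the sample output above
    if isinstance(month, int) and not isinstance(month, bool) and 1 <= month <= 12:
        return calendar.month_name[month]
    return "Invalid Month"

print([get_month_stub(m) for m in [0, 1, 2, 3, 13]])
# ['Invalid Month', 'January', 'February', 'March', 'Invalid Month']
```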
Notes¶
- Ensure that the `calendar_functions` module is correctly implemented and imported.
- The behavior for invalid inputs depends on the implementation of `get_month` and `get_month_number`.
License¶
This module is licensed under the MIT License.
from dsg_lib.common_functions import calendar_functions
from typing import List, Any
# List of month numbers to test, including invalid values (0, 13)
month_list: List[int] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
# List of month names to test, including an invalid value ("bob")
month_names: List[str] = [
"january",
"february",
"march",
"april",
"may",
"june",
"july",
"august",
"september",
"october",
"november",
"december",
"bob",
]
def calendar_check_number() -> None:
    """
    Example: Demonstrates converting month numbers to month names.
    Iterates through `month_list` and prints the result of get_month for each.
    """
    for i in month_list:
        month = calendar_functions.get_month(month=i)
        print(month)


def calendar_check_name() -> None:
    """
    Example: Demonstrates converting month names to month numbers.
    Iterates through `month_names` and prints the result of get_month_number for each.
    """
    for i in month_names:
        month = calendar_functions.get_month_number(month_name=i)
        print(month)


def calendar_check_float_and_invalid_types() -> None:
    """
    Example: Tests get_month with float values and various invalid types.
    Shows how the function handles non-integer and unexpected input types.
    """
    print("\nTesting get_month with float and invalid types:")
    test_values: List[Any] = [1.0, 12.0, 5.5, "3", None, [1], {"month": 2}]
    for val in test_values:
        print(f"Input: {val!r} -> Output: {calendar_functions.get_month(month=val)}")


def calendar_check_name_variants() -> None:
    """
    Example: Tests get_month_number with name variants and invalid types.
    Includes extra spaces, different cases, abbreviations, and non-string types.
    """
    print("\nTesting get_month_number with name variants and invalid types:")
    test_names: List[Any] = [
        " January ", "FEBRUARY", "mar", "Apr", "may", "JUNE", "July", "august",
        "Sept", "oct", "nov", "december",
        5, None, ["March"], {"month": "April"},
    ]
    for name in test_names:
        print(f"Input: {name!r} -> Output: {calendar_functions.get_month_number(month_name=name)}")


if __name__ == "__main__":
    # Run all example checks to demonstrate library usage and edge case handling
    calendar_check_number()
    calendar_check_name()
    calendar_check_float_and_invalid_types()
    calendar_check_name_variants()
pattern_example Example¶
Pattern Example Module¶
This module demonstrates the usage of the `pattern_between_two_char` function from the `dsg_lib.common_functions.patterns` package. It provides examples of how to extract patterns between specified characters in a given text block.
Features¶
- ASCII_LIST: A comprehensive list of ASCII characters, which can be used for various text processing tasks.
- pattern_find: A utility function to find and pretty-print patterns between two specified characters in a text block.
- run_examples: A function that runs example use cases, including:
- Extracting patterns from a simple text block.
- Generating a large random text block and extracting patterns from it.
Usage¶
To run the examples, execute this script directly. The output will demonstrate how patterns are extracted from text blocks.
Functions¶
pattern_find(left_char: str, right_char: str, text_block: str)¶
Finds and pretty-prints patterns between the specified `left_char` and `right_char` in the provided `text_block`.
run_examples()¶
Runs example use cases to showcase the functionality of the `pattern_between_two_char` function.
Example Output¶
When running the script, you will see:
1. Patterns extracted from a predefined text block.
2. Patterns extracted from a randomly generated large text block.
License¶
This module is licensed under the MIT License.
import pprint
from random import randint
from dsg_lib.common_functions.patterns import pattern_between_two_char
ASCII_LIST = [
" ",
"!",
'""',
"#",
"$",
"%",
"&",
"'",
"(",
")",
"*",
"+",
",",
"-",
".",
"/",
"0",
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9",
":",
";",
"<",
"=",
">",
"?",
"@",
"A",
"B",
"C",
"D",
"E",
"F",
"G",
"H",
"I",
"J",
"K",
"L",
"M",
"N",
"O",
"P",
"Q",
"R",
"S",
"T",
"U",
"V",
"W",
"X",
"Y",
"Z",
"[",
"\\",
"]",
"^",
"_",
"`",
"a",
"b",
"c",
"d",
"e",
"f",
"g",
"h",
"i",
"j",
"k",
"l",
"m",
"n",
"o",
"p",
"q",
"r",
"s",
"t",
"u",
"v",
"w",
"x",
"y",
"z",
"{",
"|",
"}",
"~",
"€",
"‚",
"ƒ",
"„",
"…",
"†",
"‡",
"ˆ",
"‰",
"Š",
"‹",
"Œ",
"Ž",
"‘",
"’",
"“",
"”",
"•",
"–",
"—",
"˜",
"™",
"š",
"›",
"œ",
"ž",
"Ÿ",
"¡",
"¢",
"£",
"¤",
"¥",
"¦",
"§",
"¨",
"©",
"ª",
"«",
"¬",
"®",
"¯",
"°",
"±",
"²",
"³",
"´",
"µ",
"¶",
"·",
"¸",
"¹",
"º",
"»",
"¼",
"½",
"¾",
"¿",
"À",
"Á",
"Â",
"Ã",
"Ä",
"Å",
"Æ",
"Ç",
"È",
"É",
"Ê",
"Ë",
"Ì",
"Í",
"Î",
"Ï",
"Ð",
"Ñ",
"Ò",
"Ó",
"Ô",
"Õ",
"Ö",
"×",
"Ø",
"Ù",
"Ú",
"Û",
"Ü",
"Ý",
"Þ",
"ß",
"à",
"á",
"â",
"ã",
"ä",
"å",
"æ",
"ç",
"è",
"é",
"ê",
"ë",
"ì",
"í",
"î",
"ï",
"ð",
"ñ",
"ò",
"ó",
"ô",
"õ",
"ö",
"÷",
"ø",
"ù",
"ú",
"û",
"ü",
"ý",
"þ",
"ÿ",
]
pp = pprint.PrettyPrinter(indent=4)
def pattern_find(left_char: str, right_char: str, text_block: str) -> None:
    # Extract all patterns between the two characters and pretty-print the result
    data = pattern_between_two_char(text_block, left_char, right_char)
    pp.pprint(data)


def run_examples() -> None:
    # Simple example: patterns delimited by 'L' and 'R'
    text_block = "Lfound oneR Lfound twoR"
    left_char = "L"
    right_char = "R"
    pattern_find(left_char=left_char, right_char=right_char, text_block=text_block)

    # Stress test: extract patterns from large randomly sized text blocks
    for _ in range(100):
        long_input = "xyz" * randint(100, 100000)
        long_text = f"{long_input}abc<one>123<two>456<three>{long_input}"
        result = pattern_between_two_char(
            text_string=long_text, left_characters="<", right_characters=">"
        )
        print(result["found"])


if __name__ == "__main__":
    run_examples()
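For reference, the kind of extraction this example exercises can be approximated with a non-greedy regular expression (a sketch only, not dsg_lib's actual implementation; `find_between` is a hypothetical helper):

```python
import re

def find_between(text: str, left: str, right: str) -> list:
    # Non-greedy capture of everything between the two marker characters
    return re.findall(f"{re.escape(left)}(.*?){re.escape(right)}", text)

print(find_between("Lfound oneR Lfound twoR", "L", "R"))
# ['found one', 'found two']
```

Escaping the markers with `re.escape` matters because characters like `<` or `$` would otherwise be interpreted as regex syntax.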
file_monitor Example¶
File Monitor Example¶
This module demonstrates the usage of the `process_files_flow` function from the `dsg_lib.common_functions.file_mover` library. It monitors a source directory for files matching a specific pattern, processes them, and moves them to a destination directory, optionally compressing the files during the process.
Features¶
- Directory Monitoring: Watches a source directory for files matching a specified pattern (e.g., `*.csv`).
- File Processing Flow: Utilizes the `process_files_flow` function to handle file movement and optional compression.
- Sample File Creation: Periodically generates sample files in the source directory for testing purposes.
- Asynchronous Execution: Leverages Python's `asyncio` for concurrent tasks, such as file creation and processing.
Configuration¶
The following constants can be configured to customize the behavior of the script:
- `SOURCE_DIRECTORY`: Path to the directory where files are monitored.
- `TEMPORARY_DIRECTORY`: Path to a temporary directory used during file processing.
- `DESTINATION_DIRECTORY`: Path to the directory where processed files are moved.
- `FILE_PATTERN`: File pattern to monitor (e.g., `*.csv`).
- `COMPRESS_FILES`: Boolean flag to enable or disable file compression during processing.
- `CLEAR_SOURCE`: Boolean flag to clear the source directory before starting.
Usage¶
- Ensure the required directories exist. The script will create them if they do not.
- Run the script to start monitoring the source directory and processing files.
- The script will also create sample files in the source directory every 10 seconds for demonstration purposes.
Example¶
To run the script:
python file_monitor.py
Press `Ctrl+C` to stop the script.
Dependencies¶
- `os` and `pathlib`: For file and directory operations.
- `asyncio`: For asynchronous task management.
- `loguru`: For logging.
- `dsg_lib.common_functions.file_mover`: For the file processing flow.
Notes¶
- The script is designed for demonstration purposes and may require adjustments for production use.
- Ensure the `dsg_lib` library is installed and accessible in your environment.
Error Handling¶
- The script gracefully handles `KeyboardInterrupt` to stop execution.
- The file creation task is canceled when the main function completes.
License¶
This module is licensed under the MIT License.
import asyncio
import os
from pathlib import Path
from loguru import logger
from dsg_lib.common_functions.file_mover import process_files_flow
# Define source, temporary, and destination directories
SOURCE_DIRECTORY: str = "/workspaces/devsetgo_lib/data/move/source/csv"
TEMPORARY_DIRECTORY: str = "/workspaces/devsetgo_lib/data/move/temp"
DESTINATION_DIRECTORY: str = "/workspaces/devsetgo_lib/data/move/destination"
FILE_PATTERN: str = "*.csv" # File pattern to monitor (e.g., '*.txt')
COMPRESS_FILES: bool = True # Set to True to compress files before moving
CLEAR_SOURCE: bool = True # Set to True to clear the source directory before starting
# Ensure directories exist
os.makedirs(SOURCE_DIRECTORY, exist_ok=True)
os.makedirs(TEMPORARY_DIRECTORY, exist_ok=True)
os.makedirs(DESTINATION_DIRECTORY, exist_ok=True)
async def create_sample_files() -> None:
    """
    Periodically create sample files in the source directory for demonstration purposes.
    This coroutine creates a new sample file every 10 seconds in the source directory.
    """
    while True:
        # Count existing files to generate a unique file name
        file_count: int = len(list(Path(SOURCE_DIRECTORY).glob('*')))
        file_name: str = f"sample_{file_count + 1}.txt"
        file_path: Path = Path(SOURCE_DIRECTORY) / file_name
        file_path.write_text("This is a sample file for testing the file mover.")
        logger.info(f"Created sample file: {file_path}")
        await asyncio.sleep(10)  # Create a new file every 10 seconds


async def main() -> None:
    """
    Main function to demonstrate the file mover library.
    Starts the sample file creation task and runs the file processing flow in a separate thread.
    Cancels the file creation task when processing is complete.
    """
    # Start the sample file creation task
    file_creator_task: asyncio.Task = asyncio.create_task(create_sample_files())
    # Run the file processing flow in a separate thread (to avoid blocking the event loop)
    loop: asyncio.AbstractEventLoop = asyncio.get_running_loop()
    await loop.run_in_executor(
        None,
        process_files_flow,
        SOURCE_DIRECTORY,
        TEMPORARY_DIRECTORY,
        DESTINATION_DIRECTORY,
        FILE_PATTERN,
        COMPRESS_FILES,
        CLEAR_SOURCE,  # Pass the clear_source flag
    )
    # Cancel the file creator task when done
    file_creator_task.cancel()
    try:
        await file_creator_task
    except asyncio.CancelledError:
        logger.info("File creation task cancelled.")


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        logger.info("File monitor example stopped.")
json_example Example¶
JSON Example Module¶
This module demonstrates how to use the `open_json` and `save_json` functions from the `dsg_lib.common_functions.file_functions` package. It provides an example JSON structure and functions to save and load JSON data to and from a file.
Features¶
- Example JSON Data: Contains a dictionary with information about historical figures and their contributions.
- Save JSON Data: Demonstrates saving JSON data to a file using the
save_json
function. - Open JSON Data: Demonstrates loading JSON data from a file using the
open_json
function.
Example JSON Structure¶
The `example_json` dictionary includes:
- A list of `super_cool_people` with details such as:
  - `name`: The name of the person.
  - `famous_for`: A brief description of their contributions.
  - `birth_date`: Their date of birth.
  - `death_date`: Their date of death.
- A `sources` field indicating the source of the information.
Functions¶
save_some_data(example_json: Dict[str, Any])¶
Saves the provided JSON data to a file named `your-file-name.json`.
open_some_data(the_file_name: str) -> Dict[str, Any]¶
Loads JSON data from the specified file and returns it as a dictionary.
save_list_json(data: list, file_name: str)¶
Saves a list of dictionaries as JSON to the specified file.
open_list_json(file_name: str) -> list¶
Loads a list of dictionaries from the specified JSON file.
try_open_nonexistent_json(file_name: str)¶
Attempts to open a non-existent JSON file and handles the error.
Usage¶
Run the module directly to:
1. Save the `example_json` data to a file.
2. Load the data back from the file.
3. Save and load a list of dictionaries.
4. Attempt to open a non-existent file.
Notes¶
- Ensure the `dsg_lib` package is installed and accessible in your environment.
- Replace `"your-file-name.json"` with the desired file name when using the functions in a real-world scenario.
Example Execution¶
python json_example.py
License¶
This module is licensed under the MIT License.
from typing import Any, Dict
from dsg_lib.common_functions.file_functions import open_json, save_json
example_json: Dict[str, Any] = {
"super_cool_people": [
{
"name": "Blaise Pascal",
"famous_for": "Blaise Pascal was a French mathematician, physicist, inventor, writer and Catholic theologian. He was a child prodigy who was educated by his father, a tax collector in Rouen. Pascal's earliest work was in the natural and applied sciences where he made important contributions to the study of fluids, and clarified the concepts of pressure and vacuum by generalising the work of Evangelista Torricelli. Pascal also wrote in defence of the scientific method.", # noqa: E501
"birth_date": "Jun 19, 1623",
"death_date": "Aug 19, 1662",
},
{
"name": "Galileo Galilei",
"famous_for": 'Galileo di Vincenzo Bonaulti de Galilei was an Italian astronomer, physicist and engineer, sometimes described as a polymath, from Pisa. Galileo has been called the "father of observational astronomy", the "father of modern physics", the "father of the scientific method", and the "father of modern science".', # noqa: E501
"birth_date": "Feb 15, 1564",
"death_date": "Jan 08, 1642",
},
{
"name": "Michelangelo di Lodovico Buonarroti Simoni",
"famous_for": "Michelangelo di Lodovico Buonarroti Simoni , known best as simply Michelangelo, was an Italian sculptor, painter, architect and poet of the High Renaissance born in the Republic of Florence, who exerted an unparalleled influence on the development of Western art.", # noqa: E501
"birth_date": "Mar 06, 1475",
"death_date": "Feb 18, 1564",
},
],
"sources": "wikipedia via Google search.",
}
def save_some_data(example_json: Dict[str, Any]) -> None:
    """
    Save the provided JSON data to a file named 'your-file-name.json'.
    Args:
        example_json (Dict[str, Any]): The JSON data to save.
    """
    save_json(file_name="your-file-name.json", data=example_json)


def open_some_data(the_file_name: str) -> Dict[str, Any]:
    """
    Load JSON data from the specified file.
    Args:
        the_file_name (str): The name of the JSON file to open.
    Returns:
        Dict[str, Any]: The loaded JSON data.
    """
    result: Dict[str, Any] = open_json(file_name=the_file_name)
    return result
# --- Additional Examples ---
simple_list_json: list = [
{"id": 1, "value": "foo"},
{"id": 2, "value": "bar"},
]
def save_list_json(data: list, file_name: str) -> None:
"""
Save a list of dictionaries as JSON.
Args:
data (list): The list of dictionaries to save.
file_name (str): The file name to save to.
"""
save_json(file_name=file_name, data=data)
def open_list_json(file_name: str) -> list:
"""
Load a list of dictionaries from a JSON file.
Args:
file_name (str): The file name to load from.
Returns:
list: The loaded list of dictionaries.
"""
return open_json(file_name=file_name)
def try_open_nonexistent_json(file_name: str) -> None:
"""
Attempt to open a non-existent JSON file and handle the error.
Args:
file_name (str): The file name to attempt to open.
"""
try:
open_json(file_name=file_name)
except FileNotFoundError as e:
print(f"Handled error: {e}")
if __name__ == "__main__":
# Example 1: Save and load a complex dictionary
print("Saving and loading example_json...")
save_some_data(example_json)
opened_file: Dict[str, Any] = open_some_data("your-file-name.json")
print("Loaded example_json:", opened_file)
# Example 2: Save and load a list of dictionaries
print("\nSaving and loading a list of dictionaries...")
save_list_json(simple_list_json, "list-example.json")
loaded_list = open_list_json("list-example.json")
print("Loaded list-example.json:", loaded_list)
# Example 3: Attempt to open a non-existent file
print("\nAttempting to open a non-existent file...")
try_open_nonexistent_json("does_not_exist.json")
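If dsg_lib is not installed, the same round trip can be checked with the standard library alone. This is a minimal sketch, not the library's implementation; the file name "stdlib-example.json" is illustrative only:

```python
# Stdlib-only sketch of a JSON save/load round trip, for environments
# where dsg_lib is unavailable. Not the library's implementation.
import json
from pathlib import Path

data = {"super_cool_people": [{"name": "Blaise Pascal"}], "sources": "wikipedia"}

path = Path("stdlib-example.json")
path.write_text(json.dumps(data, indent=2), encoding="utf-8")  # like save_json

loaded = json.loads(path.read_text(encoding="utf-8"))  # like open_json
print(loaded["super_cool_people"][0]["name"])  # Blaise Pascal

path.unlink()  # remove the scratch file
```

The same pattern extends to lists of dictionaries, since json.dumps serializes both dicts and lists.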
text_example Example¶
Text Example Module¶
This module demonstrates basic file operations using the dsg_lib.common_functions.file_functions library. It provides examples of saving text data to a file and reading text data from a file.
Functions¶
save_some_data(example_text: str)¶
Saves the provided text data to a file.
- Parameters:
  - example_text (str): The text data to be saved.
- Behavior: Calls the save_text function from dsg_lib.common_functions.file_functions to save the data to a file named your-file-name.txt.
open_some_data(the_file_name: str) -> str¶
Reads text data from a specified file.
- Parameters:
  - the_file_name (str): The name of the file to be read.
- Returns:
  - result (str): The content of the file as a string.
- Behavior: Calls the open_text function from dsg_lib.common_functions.file_functions to read the content of the file.
save_csv_example(csv_data: list[list[str]], file_name: str = "example.csv")¶
Saves example rows to a CSV file.
- Parameters:
  - csv_data (list[list[str]]): Rows for the CSV (first row is the header).
  - file_name (str): Target CSV file name.
open_csv_example(file_name: str = "example.csv") -> list[dict]¶
Opens a CSV file and returns its content as a list of dictionaries.
- Parameters:
  - file_name (str): Name of the CSV file to read.
- Returns:
  - list[dict]: Parsed CSV rows.
save_json_example(data: dict | list, file_name: str = "example.json")¶
Saves a dictionary or list as JSON.
- Parameters:
  - data (dict | list): Data to serialize.
  - file_name (str): Target JSON file name.
open_json_example(file_name: str = "example.json") -> dict | list¶
Opens a JSON file and returns its content.
- Parameters:
  - file_name (str): Name of the JSON file to read.
- Returns:
  - dict | list: Parsed JSON content.
Example Usage¶
if __name__ == "__main__":
save_some_data(example_text)
opened_file: str = open_some_data("your-file-name.txt")
print(opened_file)
# CSV example
csv_rows = [
["header1", "header2"],
["row1col1", "row1col2"]
]
save_csv_example(csv_rows)
print(open_csv_example())
# JSON example
json_obj = {"foo": "bar", "count": 1}
save_json_example(json_obj)
print(open_json_example())
Notes¶
- Ensure that the dsg_lib library is installed and accessible in your environment.
- The file operations assume that the file paths and permissions are correctly configured.
License¶
This module is licensed under the MIT License.
from dsg_lib.common_functions.file_functions import (
open_text, save_text,
save_csv, open_csv,
save_json, open_json
)
example_text = """
<!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1>This is a Heading</h1>
<p>This is a paragraph.</p>
</body>
</html>
"""
def save_some_data(example_text: str):
# function requires file_name and data as a string to be sent.
# see documentation for additional information
save_text(file_name="your-file-name.txt", data=example_text)
def open_some_data(the_file_name: str) -> str:
# function requires file_name and a string will be returned
# see documentation for additional information
result: str = open_text(file_name=the_file_name)
return result
def save_csv_example(
csv_data: list[list[str]],
file_name: str = "example.csv"
) -> None:
"""
Save example rows to a CSV file.
Args:
csv_data (list[list[str]]): Rows for CSV (first row is header).
file_name (str): Target CSV file name.
"""
# write rows out
save_csv(file_name=file_name, data=csv_data)
def open_csv_example(
file_name: str = "example.csv"
) -> list[dict]:
"""
Open a CSV file and return its content as list of dicts.
Args:
file_name (str): Name of CSV to read.
Returns:
list[dict]: Parsed CSV rows.
"""
return open_csv(file_name=file_name)
def save_json_example(
data: dict | list,
file_name: str = "example.json"
) -> None:
"""
Save a dict or list as JSON.
Args:
data (dict|list): Data to serialize.
file_name (str): Target JSON file name.
"""
save_json(file_name=file_name, data=data)
def open_json_example(
file_name: str = "example.json"
) -> dict | list:
"""
Open a JSON file and return its content.
Args:
file_name (str): Name of JSON to read.
Returns:
dict|list: Parsed JSON content.
"""
return open_json(file_name=file_name)
if __name__ == "__main__":
save_some_data(example_text)
opened_file: str = open_some_data("your-file-name.txt")
print(opened_file)
# CSV example
csv_rows = [
["header1", "header2"],
["row1col1", "row1col2"]
]
save_csv_example(csv_rows)
print(open_csv_example())
# JSON example
json_obj = {"foo": "bar", "count": 1}
save_json_example(json_obj)
print(open_json_example())
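For a quick sanity check without dsg_lib, the text round trip above can be approximated with pathlib. This is a stdlib-only sketch, not the library's implementation; "scratch-example.txt" is an illustrative file name:

```python
# Stdlib-only approximation of the save_text/open_text round trip.
from pathlib import Path

html = "<h1>This is a Heading</h1>"
scratch = Path("scratch-example.txt")

scratch.write_text(html, encoding="utf-8")            # roughly save_text(...)
round_tripped = scratch.read_text(encoding="utf-8")   # roughly open_text(...)

print(round_tripped == html)  # True

scratch.unlink()  # remove the scratch file
```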
validate_emails Example¶
Email Validation Example Script¶
This module demonstrates how to validate a list of email addresses using various configurations. It leverages the validate_email_address function from the dsg_lib.common_functions.email_validation module to perform the validation.
The script is designed to:
- Validate a predefined list of email addresses.
- Use multiple configurations to test different validation scenarios.
- Measure and display the time taken to validate all email addresses.
- Print the validation results in sorted order for better readability.
Features¶
- Email Validation: Checks the validity of email addresses based on various configurations.
- Custom Configurations: Supports multiple validation options such as deliverability checks, allowing quoted local parts, and more.
- Performance Measurement: Tracks the time taken to validate all email addresses.
- Result Sorting: Outputs the validation results in a sorted format for easier analysis.
Usage¶
Run the script as a standalone module:
$ python validate_emails.py
Attributes¶
Email Addresses¶
A predefined list of email addresses to validate. The list includes:
- Valid email addresses.
- Invalid email addresses.
- Edge cases such as emails with non-ASCII characters, quoted local parts, and domain literals.
Configurations¶
A list of dictionaries, where each dictionary represents a validation configuration. Configuration options include:
- check_deliverability (bool): Whether to check if the email address is deliverable.
- test_environment (bool): Whether the function is being run in a test environment.
- allow_smtputf8 (bool): Whether to allow non-ASCII characters in the email address.
- allow_empty_local (bool): Whether to allow email addresses with an empty local part.
- allow_quoted_local (bool): Whether to allow email addresses with a quoted local part.
- allow_display_name (bool): Whether to allow email addresses with a display name.
- allow_domain_literal (bool): Whether to allow email addresses with a domain literal.
- globally_deliverable (bool): Whether the email address should be globally deliverable.
- timeout (int): The timeout for the validation in seconds.
- dns_type (str): The type of DNS resolver to use for the validation. Can be 'dns' or 'timeout'.
Functions¶
validate_email_address(email: str, **kwargs: dict) -> dict¶
Validates an email address using the provided configuration and returns a dictionary with the results.
Example Output¶
The script outputs the validation results in sorted order, along with the time taken for the validation process. Each result includes:
- The email address.
- The validation status.
- Additional metadata based on the configuration used.
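As a rough, stdlib-only illustration of that per-email result shape, the sketch below uses a deliberately simplistic regex check. This is NOT dsg_lib's validator, which performs full RFC-aware parsing and optional deliverability checks; the helper name and result keys here are hypothetical:

```python
# Sketch of the result shape only: a dict per email, sorted by address.
# The regex is intentionally naive and rejects many technically valid
# addresses; it stands in for the real validator purely for illustration.
import re

SIMPLE_EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def sketch_validate(email: str) -> dict:
    """Return a minimal result dict: email, validity flag, and a note."""
    ok = bool(SIMPLE_EMAIL.match(email))
    return {"email": email, "valid": ok, "note": "regex sketch only"}

results = sorted(
    (sketch_validate(e) for e in ["bob@devsetgo.com", "Abc.example.com"]),
    key=lambda r: r["email"],
)
for r in results:
    print(r["email"], r["valid"])
```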
License¶
This module is licensed under the MIT License.
import pprint
import time
from typing import List, Dict, Any
from dsg_lib.common_functions.email_validation import validate_email_address
def run_validation(
email_addresses: List[str],
configurations: List[Dict[str, Any]],
) -> List[Dict[str, Any]]:
"""
Validate each email against multiple configurations.
Args:
email_addresses: List of email strings to validate.
configurations: List of parameter dicts for validation.
Returns:
A sorted list of result dicts (sorted by "email" key).
"""
results: List[Dict[str, Any]] = []
# iterate over every email and config combination
for email in email_addresses:
for config in configurations:
# call the core email validator and collect its output
res = validate_email_address(email, **config)
results.append(res)
# sort by email for consistent output
return sorted(results, key=lambda x: x["email"])
def main() -> None:
"""
Entry point for the email validation example.
Defines a list of emails and configurations, measures execution time,
runs validation, and pretty-prints the results.
"""
# list of example email addresses
email_addresses: List[str] = [
"bob@devsetgo.com",
"bob@devset.go",
"foo@yahoo.com",
"bob@gmail.com",
"very fake@devsetgo.com",
"jane.doe@example.com",
"john_doe@example.co.uk",
"user.name+tag+sorting@example.com",
"x@example.com", # shortest possible email address
"example-indeed@strange-example.com",
"admin@mailserver1", # local domain name with no TLD
"example@s.example", # see the list of Internet top-level domains
'" "@example.org', # space between the quotes
'"john..doe"@example.org', # quoted double dot
"mailhost!username@example.org", # bangified host route used for uucp mailers
"user%example.com@example.org", # percent sign in local part
"user-@example.org", # valid due to the last character being an allowed character
# Invalid email addresses
"Abc.example.com", # no @ character
"A@b@c@example.com", # only one @ is allowed outside quotation marks
'a"b(c)d,e:f;g<h>i[j\\k]l@example.com', # none of the special characters in this local part are allowed outside quotation marks
'just"not"right@example.com', # quoted strings must be dot separated or the only element making up the local-part
'this is"not\\allowed@example.com', # spaces, quotes, and backslashes may only exist when within quoted strings and preceded by a backslash
'this\\ still\\"not\\\\allowed@example.com', # even if escaped (preceded by a backslash), spaces, quotes, and backslashes must still be contained by quotes
"1234567890123456789012345678901234567890123456789012345678901234+x@example.com", # local part is longer than 64 characters
# Emails with empty local part
"@example.com", # only valid if allow_empty_local is True
# Emails with non-ASCII characters
"üñîçøðé@example.com", # only valid if allow_smtputf8 is True
"user@üñîçøðé.com", # only valid if allow_smtputf8 is True
# Emails with quoted local part
'"john.doe"@example.com', # only valid if allow_quoted_local is True
'"john..doe"@example.com', # only valid if allow_quoted_local is True
# Emails with display name
"John Doe <john@example.com>", # only valid if allow_display_name is True
# Emails with domain literal
"user@[192.0.2.1]", # only valid if allow_domain_literal is True
# Emails with long local part
"a" * 65 + "@example.com", # local part is longer than 64 characters
# Emails with invalid characters
"john doe@example.com", # space is not allowed
"john@doe@example.com", # only one @ is allowed
"john.doe@.com", # domain can't start with a dot
"john.doe@example..com", # domain can't have two consecutive dots
"test@google.com",
]
# various validation parameter sets to exercise different rules
configurations: List[Dict[str, Any]] = [
{
"check_deliverability": True,
"test_environment": False,
"allow_smtputf8": False,
"allow_empty_local": False,
"allow_quoted_local": False,
"allow_display_name": False,
"allow_domain_literal": False,
"globally_deliverable": None,
"timeout": 10,
"dns_type": "timeout",
},
{
"check_deliverability": False,
"test_environment": True,
"allow_smtputf8": True,
"allow_empty_local": True,
"allow_quoted_local": True,
"allow_display_name": True,
"allow_domain_literal": True,
"globally_deliverable": None,
"timeout": 5,
"dns_type": "dns",
},
{"check_deliverability": True},
{
"check_deliverability": False,
"test_environment": False,
"allow_smtputf8": True,
"allow_empty_local": False,
"allow_quoted_local": True,
"allow_display_name": False,
"allow_domain_literal": True,
"globally_deliverable": None,
"timeout": 15,
"dns_type": "timeout",
},
{
"check_deliverability": True,
"test_environment": True,
"allow_smtputf8": False,
"allow_empty_local": True,
"allow_quoted_local": False,
"allow_display_name": True,
"allow_domain_literal": False,
"globally_deliverable": None,
"timeout": 20,
"dns_type": "dns",
},
{
"check_deliverability": False,
"test_environment": False,
"allow_smtputf8": True,
"allow_empty_local": True,
"allow_quoted_local": True,
"allow_display_name": True,
"allow_domain_literal": True,
"globally_deliverable": None,
"timeout": 25,
"dns_type": "timeout",
},
{
"check_deliverability": True,
"test_environment": True,
"allow_smtputf8": False,
"allow_empty_local": False,
"allow_quoted_local": False,
"allow_display_name": False,
"allow_domain_literal": False,
"globally_deliverable": None,
"timeout": 30,
"dns_type": "dns",
},
{
"check_deliverability": False,
"test_environment": True,
"allow_smtputf8": True,
"allow_empty_local": False,
"allow_quoted_local": True,
"allow_display_name": True,
"allow_domain_literal": False,
"globally_deliverable": None,
"timeout": 35,
"dns_type": "timeout",
},
{
"check_deliverability": True,
"test_environment": False,
"allow_smtputf8": False,
"allow_empty_local": True,
"allow_quoted_local": True,
"allow_display_name": False,
"allow_domain_literal": True,
"globally_deliverable": None,
"timeout": 40,
"dns_type": "dns",
},
{
"check_deliverability": False,
"test_environment": True,
"allow_smtputf8": True,
"allow_empty_local": False,
"allow_quoted_local": False,
"allow_display_name": True,
"allow_domain_literal": True,
"globally_deliverable": None,
"timeout": 45,
"dns_type": "timeout",
},
]
# measure and run
start_time: float = time.time()
results = run_validation(email_addresses, configurations)
elapsed: float = time.time() - start_time
# output each result
for record in results:
pprint.pprint(record, indent=4)
print(f"Time taken: {elapsed:.2f}s")
if __name__ == "__main__":
main()
fastapi_example Example¶
FastAPI Example Module¶
This module demonstrates the use of FastAPI in conjunction with the DevSetGo Toolkit to create a fully functional API. It includes examples of database operations, user management, and system health endpoints. The module is designed to showcase best practices for building scalable and maintainable FastAPI applications.
Features¶
- Database Integration:
  - Uses SQLAlchemy for ORM and database interactions.
  - Supports SQLite (in-memory) for demonstration purposes.
  - Includes models for User and Address tables with relationships.
- API Endpoints:
  - CRUD operations for User records.
  - Bulk operations for creating and deleting records.
  - System health endpoints for monitoring uptime, heap dumps, and status.
  - Robots.txt endpoint for bot management.
- Logging:
  - Configured using loguru for structured and detailed logging.
  - Logs API requests, database operations, and system events.
- Asynchronous Operations:
  - Fully asynchronous database operations using asyncpg and aiosqlite.
  - Asynchronous lifespan management for startup and shutdown events.
- Configuration:
  - Modular configuration for database, logging, and API behavior.
  - Bot management configuration for controlling access to the API.
Usage¶
- Run the Application: start the FastAPI application with:
  uvicorn fastapi_example:app --host 127.0.0.1 --port 5001
- Access the API:
  - OpenAPI documentation: http://127.0.0.1:5001/docs
  - ReDoc documentation: http://127.0.0.1:5001/redoc
- Database Operations:
  - Use the provided endpoints to perform CRUD operations on the User and Address tables.
  - Example endpoints include:
    - /database/create-one-record
    - /database/get-all
    - /database/delete-one-record
- Health Monitoring:
  - Access system health endpoints under /api/health.
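For example, the body for /database/create-one-record is plain JSON whose field names mirror the UserCreate model shown later in this module. A stdlib-only sketch of building and checking such a payload (sending it, e.g. with curl or httpx, requires the app to be running at the address above):

```python
# Build the JSON body the create-one-record endpoint expects.
# Field names follow the UserCreate model in this example.
import json

payload = {
    "first_name": "Alice",
    "last_name": "Smith",
    "email": "alice@example.com",
}

body = json.dumps(payload)  # serialized request body
print(body)
```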
Dependencies¶
- FastAPI: Web framework for building APIs.
- SQLAlchemy: ORM for database interactions.
- loguru: Logging library for structured logs.
- tqdm: Progress bar for bulk operations.
- pydantic: Data validation and settings management.
- DevSetGo Toolkit: Custom library for database and common utility functions.
License¶
This module is licensed under the MIT License.
import datetime
import secrets
import time
from contextlib import asynccontextmanager
from fastapi import Body, FastAPI, Query
from fastapi.responses import RedirectResponse
from loguru import logger
from pydantic import BaseModel, EmailStr
from sqlalchemy import Column, ForeignKey, Select, String, and_, delete, insert, or_, update
from sqlalchemy.orm import relationship
from tqdm import tqdm
from dsg_lib.async_database_functions import (
async_database,
base_schema,
database_config,
database_operations,
)
from dsg_lib.common_functions import logging_config
from dsg_lib.fastapi_functions import default_endpoints, system_health_endpoints
config = [
{"bot": "Bytespider", "allow": False},
{"bot": "GPTBot", "allow": False},
{"bot": "ClaudeBot", "allow": True},
{"bot": "ImagesiftBot", "allow": True},
{"bot": "CCBot", "allow": False},
{"bot": "ChatGPT-User", "allow": True},
{"bot": "omgili", "allow": False},
{"bot": "Diffbot", "allow": False},
{"bot": "Claude-Web", "allow": True},
{"bot": "PerplexityBot", "allow": False},
]
logging_config.config_log(
logging_level="INFO",
log_serializer=False,
logging_directory="log",
log_name="log.log",
intercept_standard_logging=False,
)
# Database connection settings used to build the DBConfig instance below
config = {
# "database_uri": "postgresql+asyncpg://postgres:postgres@postgresdb/postgres",
"database_uri": "sqlite+aiosqlite:///:memory:?cache=shared",
"echo": False,
"future": True,
# "pool_pre_ping": True,
# "pool_size": 10,
# "max_overflow": 10,
"pool_recycle": 3600,
# "pool_timeout": 30,
}
# create database configuration
db_config = database_config.DBConfig(config)
# Create an AsyncDatabase instance
async_db = async_database.AsyncDatabase(db_config)
# Create a DatabaseOperations instance
db_ops = database_operations.DatabaseOperations(async_db)
class User(base_schema.SchemaBaseSQLite, async_db.Base):
"""
User table storing user details like first name, last name, and email
"""
__tablename__ = "users"
__table_args__ = {
"comment": "User table storing user details like first name, last name, and email"
}
first_name = Column(String(50), unique=False, index=True) # First name of the user
last_name = Column(String(50), unique=False, index=True) # Last name of the user
email = Column(
String(200), unique=True, index=True, nullable=True
) # Email of the user, must be unique
addresses = relationship(
"Address", order_by="Address.pkid", back_populates="user"
) # Relationship to the Address class
class Address(base_schema.SchemaBaseSQLite, async_db.Base):
"""
Address table storing address details like street, city, and zip code
"""
__tablename__ = "addresses"
__table_args__ = {
"comment": "Address table storing address details like street, city, and zip code"
}
street = Column(String(200), unique=False, index=True) # Street of the address
city = Column(String(200), unique=False, index=True) # City of the address
zip = Column(String(50), unique=False, index=True) # Zip code of the address
user_id = Column(
String(36), ForeignKey("users.pkid")
) # Foreign key to the User table
user = relationship(
"User", back_populates="addresses"
) # Relationship to the User class
@asynccontextmanager
async def lifespan(app: FastAPI):
logger.info("starting up")
# Create the tables in the database
await async_db.create_tables()
create_users = True
if create_users:
await create_a_bunch_of_users(single_entry=2, many_entries=100)
yield
logger.info("shutting down")
await async_db.disconnect()
logger.info("database disconnected")
print("That's all folks!")
# Create an instance of the FastAPI class
app = FastAPI(
title="FastAPI Example", # The title of the API
description="This is an example of a FastAPI application using the DevSetGo Toolkit.", # A brief description of the API
version="0.1.0", # The version of the API
docs_url="/docs", # The URL where the API documentation will be served
redoc_url="/redoc", # The URL where the ReDoc documentation will be served
openapi_url="/openapi.json", # The URL where the OpenAPI schema will be served
debug=True, # Enable debug mode
middleware=[], # A list of middleware to include in the application
routes=[], # A list of routes to include in the application
lifespan=lifespan, # this is the replacement for the startup and shutdown events
)
@app.get("/")
async def root():
"""
Redirect to the OpenAPI documentation.
Example:
GET /
Returns:
Redirects to /docs for interactive API documentation.
"""
logger.info("Redirecting to OpenAPI docs")
response = RedirectResponse(url="/docs")
return response
# Example configuration
config = {
"enable_status_endpoint": True,
"enable_uptime_endpoint": True,
"enable_heapdump_endpoint": True,
"enable_robots_endpoint": True,
"user_agents": [
{"bot": "Bytespider", "allow": False},
{"bot": "GPTBot", "allow": False},
{"bot": "ClaudeBot", "allow": True},
{"bot": "ImagesiftBot", "allow": True},
{"bot": "CCBot", "allow": False},
{"bot": "ChatGPT-User", "allow": True},
{"bot": "omgili", "allow": False},
{"bot": "Diffbot", "allow": False},
{"bot": "Claude-Web", "allow": True},
{"bot": "PerplexityBot", "allow": False},
{"bot": "Googlebot", "allow": True},
{"bot": "Bingbot", "allow": True},
{"bot": "Baiduspider", "allow": False},
{"bot": "YandexBot", "allow": False},
{"bot": "DuckDuckBot", "allow": True},
{"bot": "Sogou", "allow": False},
{"bot": "Exabot", "allow": False},
{"bot": "facebot", "allow": False},
{"bot": "ia_archiver", "allow": False},
],
}
# Create and include the health router if enabled
if (
config["enable_status_endpoint"]
or config["enable_uptime_endpoint"]
or config["enable_heapdump_endpoint"]
):
health_router = system_health_endpoints.create_health_router(config)
app.include_router(health_router, prefix="/api/health", tags=["system-health"])
# Create and include the default router if enabled
if config["enable_robots_endpoint"]:
default_router = default_endpoints.create_default_router(config["user_agents"])
app.include_router(default_router, prefix="", tags=["default"])
async def create_a_bunch_of_users(single_entry=0, many_entries=0):
logger.info(f"single_entry: {single_entry}")
await async_db.create_tables()
# Create a list to hold the user data
# Create a loop to generate user data
for _ in tqdm(range(single_entry), desc="executing one"):
value = secrets.token_hex(16)
user = User(
first_name=f"First{value}",
last_name=f"Last{value}",
email=f"user{value}@example.com",
)
logger.info(f"created_users: {user}")
await db_ops.create_one(user)
users = []
# Create a loop to generate user data
for i in tqdm(range(many_entries), desc="executing many"):
value_one = secrets.token_hex(4)
value_two = secrets.token_hex(8)
user = User(
first_name=f"First{value_one}{i}{value_two}",
last_name=f"Last{value_one}{i}{value_two}",
email=f"user{value_one}{i}{value_two}@example.com",
)
logger.info(f"created_users: {user.first_name}")
users.append(user)
# Use db_ops to add the users to the database
await db_ops.create_many(users)
@app.get("/database/get-primary-key", tags=["Database Examples"])
async def table_primary_key():
"""
Get the primary key(s) of the User table.
Example:
GET /database/get-primary-key
Returns:
The primary key column(s) for the User table.
"""
logger.info("Getting primary key of User table")
pk = await db_ops.get_primary_keys(User)
logger.info(f"Primary key of User table: {pk}")
return {"pk": pk}
@app.get("/database/get-column-details", tags=["Database Examples"])
async def table_column_details():
"""
Get details about all columns in the User table.
Example:
GET /database/get-column-details
Returns:
Metadata for each column in the User table.
"""
logger.info("Getting column details of User table")
columns = await db_ops.get_columns_details(User)
logger.info(f"Column details of User table: {columns}")
return {"columns": columns}
@app.get("/database/get-tables", tags=["Database Examples"])
async def table_table_details():
"""
List all table names in the database.
Example:
GET /database/get-tables
Returns:
A list of all table names.
"""
logger.info("Getting table names")
tables = await db_ops.get_table_names()
logger.info(f"Table names: {tables}")
return {"table_names": tables}
@app.get("/database/get-count", tags=["Database Examples"])
async def get_count():
"""
Get the total number of User records.
Example:
GET /database/get-count
Returns:
The count of User records.
"""
logger.info("Getting count of users")
count = await db_ops.count_query(Select(User))
logger.info(f"Count of users: {count}")
return {"count": count}
@app.get("/database/get-all", tags=["Database Examples"])
async def get_all(offset: int = 0, limit: int = Query(100, le=100000, ge=1)):
"""
Retrieve all User records with pagination.
Example:
GET /database/get-all?offset=0&limit=10
Returns:
A list of User records.
"""
logger.info(f"Getting all users with offset {offset} and limit {limit}")
records = await db_ops.read_query(Select(User).offset(offset).limit(limit))
logger.info(f"Retrieved {len(records)} users")
return {"records": records}
@app.get("/database/get-one-record", tags=["Database Examples"])
async def read_one_record(record_id: str):
"""
Retrieve a single User record by primary key.
Example:
GET /database/get-one-record?record_id=some-uuid
Returns:
The User record with the given primary key.
"""
logger.info(f"Reading one record with id {record_id}")
record = await db_ops.read_one_record(Select(User).where(User.pkid == record_id))
logger.info(f"Record with id {record_id}: {record}")
return record
class UserBase(BaseModel):
first_name: str
last_name: str
email: EmailStr
class UserCreate(UserBase):
pass
@app.post("/database/create-one-record", status_code=201, tags=["Database Examples"])
async def create_one_record(new_user: UserCreate):
"""
Create a new User record.
Example:
POST /database/create-one-record
{
"first_name": "Alice",
"last_name": "Smith",
"email": "alice@example.com"
}
Returns:
The created User record.
"""
logger.info(f"Creating one record: {new_user}")
user = User(**new_user.dict())
record = await db_ops.create_one(user)
logger.info(f"Created record: {record}")
return record
@app.post("/database/create-many-records", status_code=201, tags=["Database Examples"])
async def create_many_records(number_of_users: int = Query(100, le=1000, ge=1)):
"""
Create multiple User records in bulk.
Example:
POST /database/create-many-records?number_of_users=10
Returns:
The number of users created and the process time.
"""
logger.info(f"Creating {number_of_users} records")
t0 = time.time()
users = []
# Create a loop to generate user data
for i in tqdm(range(number_of_users), desc="executing many"):
value_one = secrets.token_hex(4)
value_two = secrets.token_hex(8)
user = User(
first_name=f"First{value_one}{i}{value_two}",
last_name=f"Last{value_one}{i}{value_two}",
email=f"user{value_one}{i}{value_two}@example.com",
)
logger.info(f"Created user: {user.first_name}")
users.append(user)
# Use db_ops to add the users to the database
await db_ops.create_many(users)
t1 = time.time()
process_time = format(t1 - t0, ".4f")
logger.info(f"Created {number_of_users} records in {process_time} seconds")
return {"number_of_users": number_of_users, "process_time": process_time}
@app.put("/database/update-one-record", status_code=200, tags=["Database Examples"])
async def update_one_record(
id: str = Body(
...,
description="UUID to update",
examples=["6087cce8-0bdd-48c2-ba96-7d557dae843e"],
),
first_name: str = Body(..., examples=["Agent"]),
last_name: str = Body(..., examples=["Smith"]),
email: str = Body(..., examples=["jim@something.com"]),
):
"""
Update a User record by primary key.
Example:
PUT /database/update-one-record
{
"id": "some-uuid",
"first_name": "Agent",
"last_name": "Smith",
"email": "jim@something.com"
}
Returns:
The updated User record.
"""
logger.info(f"Updating one record with id {id}")
# set date_updated explicitly in new_values, since SQLite does not
# refresh it automatically and other databases may not either
new_values = {
"first_name": first_name,
"last_name": last_name,
"email": email,
"date_updated": datetime.datetime.now(datetime.timezone.utc),
}
record = await db_ops.update_one(table=User, record_id=id, new_values=new_values)
logger.info(f"Updated record with id {id}")
return record
@app.delete("/database/delete-one-record", status_code=200, tags=["Database Examples"])
async def delete_one_record(record_id: str = Body(...)):
"""
Delete a User record by primary key.
Example:
DELETE /database/delete-one-record
{
"record_id": "some-uuid"
}
Returns:
Success message or error.
"""
logger.info(f"Deleting one record with id {record_id}")
record = await db_ops.delete_one(table=User, record_id=record_id)
logger.info(f"Deleted record with id {record_id}")
return record
@app.delete(
"/database/delete-many-records-aka-this-is-a-bad-idea",
status_code=201,
tags=["Database Examples"],
)
async def delete_many_records(
id_values: list = Body(...), id_column_name: str = "pkid"
):
"""
Delete multiple User records by a list of primary keys.
Example:
DELETE /database/delete-many-records-aka-this-is-a-bad-idea
{
"id_values": ["uuid1", "uuid2", "uuid3"]
}
Returns:
The number of records deleted.
"""
logger.info(f"Deleting many records with ids {id_values}")
record = await db_ops.delete_many(
table=User, id_column_name="pkid", id_values=id_values
)
logger.info(f"Deleted records with ids {id_values}")
return record
@app.get(
"/database/get-list-of-records-to-paste-into-delete-many-records",
tags=["Database Examples"],
)
async def read_list_of_records(
offset: int = Query(0, le=1000, ge=0), limit: int = Query(100, le=10000, ge=1)
):
"""
Get a list of User primary keys for use in bulk delete.
Example:
GET /database/get-list-of-records-to-paste-into-delete-many-records?offset=0&limit=10
Returns:
A list of User primary keys.
"""
logger.info(f"Reading list of records with offset {offset} and limit {limit}")
records = await db_ops.read_query(Select(User), offset=offset, limit=limit)
records_list = []
for record in records:
records_list.append(record.pkid)
logger.info(f"Read list of records: {records_list}")
return records_list
@app.get("/database/get-list-of-distinct-records", tags=["Database Examples"])
async def read_list_of_distinct_records():
"""
Insert many similar User records and return distinct last names.
Example:
GET /database/get-list-of-distinct-records
Returns:
A list of distinct last names.
"""
# create many similar records to test distinct
queries = []
for i in tqdm(range(100), desc="executing many fake users"):
value = f"Agent {i}"
queries.append(
(
insert(User),
{
"first_name": value,
"last_name": "Smith",
"email": f"{value.lower()}@abc.com",
},
)
)
results = await db_ops.execute_many(queries)
print(results)
distinct_last_name_query = Select(User.last_name).distinct()
logger.info(f"Executing query: {distinct_last_name_query}")
records = await db_ops.read_query(query=distinct_last_name_query)
logger.info(f"Read list of distinct records: {records}")
return records
@app.post("/database/execute-one", tags=["Database Examples"])
async def execute_query(query: str = Body(...)):
"""
Example of running a single SQL query (insert) using execute_one.
Example:
POST /database/execute-one
{
"query": "insert example (not used, see code)"
}
Returns:
The inserted User record(s) with first_name "John".
"""
# add a user with execute_one
logger.info(f"Executing query: {query}")
query = insert(User).values(first_name="John", last_name="Doe", email="x@abc.com")
result = await db_ops.execute_one(query)
logger.info(f"Executed query: {result}")
query_return = await db_ops.read_query(
Select(User).where(User.first_name == "John")
)
return query_return
@app.post("/database/execute-many", tags=["Database Examples"])
async def execute_many(query: str = Body(...)):
"""
Example of running multiple SQL queries (bulk insert) using execute_many.
Example:
POST /database/execute-many
{
"query": "bulk insert example (not used, see code)"
}
Returns:
All User records after bulk insert.
"""
# multiple users with execute_many
logger.info(f"Executing query: {query}")
queries = []
for i in range(10):
query = insert(User).values(
first_name=f"User{i}", last_name="Doe", email="x@abc.com"
)
queries.append(query)
results = await db_ops.execute_many(queries)
logger.info(f"Executed query: {results}")
query_return = await db_ops.read_query(Select(User))
return query_return
@app.get("/database/get-distinct-emails", tags=["Database Examples"])
async def get_distinct_emails():
"""
Get a list of distinct emails from the User table.
Example:
GET /database/get-distinct-emails
Returns:
A list of unique email addresses.
"""
from sqlalchemy import select
query = select(User.email).distinct()
logger.info("Getting distinct emails")
records = await db_ops.read_query(query)
return {"distinct_emails": records}
@app.get("/database/get-users-by-email", tags=["Database Examples"])
async def get_users_by_email(email: str):
"""
Get User records by email address.
Example:
GET /database/get-users-by-email?email=alice@example.com
Returns:
A list of User records matching the email.
"""
query = Select(User).where(User.email == email)
logger.info(f"Getting users with email: {email}")
records = await db_ops.read_query(query)
return {"users": records}
@app.get("/database/get-users-by-name", tags=["Database Examples"])
async def get_users_by_name(first_name: str = "", last_name: str = ""):
"""
Get User records by first and/or last name.
Example:
GET /database/get-users-by-name?first_name=Alice&last_name=Smith
Returns:
A list of User records matching the name.
"""
filters = []
if first_name:
filters.append(User.first_name == first_name)
if last_name:
filters.append(User.last_name == last_name)
query = Select(User).where(and_(*filters)) if filters else Select(User)
logger.info(f"Getting users by name: {first_name} {last_name}")
records = await db_ops.read_query(query)
return {"users": records}
@app.get("/database/get-users-or", tags=["Database Examples"])
async def get_users_or(first_name: str = "", last_name: str = ""):
"""
Get User records where first name OR last name matches.
Example:
GET /database/get-users-or?first_name=Alice
Returns:
A list of User records matching either name.
"""
filters = []
if first_name:
filters.append(User.first_name == first_name)
if last_name:
filters.append(User.last_name == last_name)
query = Select(User).where(or_(*filters)) if filters else Select(User)
logger.info(f"Getting users by OR: {first_name} {last_name}")
records = await db_ops.read_query(query)
return {"users": records}
@app.get("/database/get-multi-query", tags=["Database Examples"])
async def get_multi_query():
"""
Run multiple queries at once and return results as a dictionary.
Example:
GET /database/get-multi-query
Returns:
A dictionary with results for each query.
"""
queries = {
"all_users": Select(User),
"distinct_emails": Select(User.email).distinct(),
"first_10": Select(User).limit(10),
}
logger.info("Running multi-query example")
results = await db_ops.read_multi_query(queries)
return results
@app.put("/database/update-email", tags=["Database Examples"])
async def update_email(record_id: str = Body(...), new_email: str = Body(...)):
"""
Update a User's email address by primary key.
Example:
PUT /database/update-email
{
"record_id": "some-uuid",
"new_email": "new@email.com"
}
Returns:
Result of the update operation.
"""
query = update(User).where(User.pkid == record_id).values(email=new_email)
logger.info(f"Updating email for user {record_id} to {new_email}")
result = await db_ops.execute_one(query)
return {"result": result}
@app.delete("/database/delete-by-email", tags=["Database Examples"])
async def delete_by_email(email: str = Body(...)):
"""
Delete User records by email address.
Example:
DELETE /database/delete-by-email
{
"email": "alice@example.com"
}
Returns:
Result of the delete operation.
"""
query = delete(User).where(User.email == email)
logger.info(f"Deleting users with email {email}")
result = await db_ops.execute_one(query)
return {"result": result}
@app.post("/database/insert-bulk", tags=["Database Examples"])
async def insert_bulk(count: int = Body(5)):
"""
Bulk insert User records using execute_many.
Example:
POST /database/insert-bulk
{
"count": 10
}
Returns:
Result of the bulk insert operation.
"""
queries = []
for i in range(count):
value = secrets.token_hex(4)
q = (
insert(User),
{
"first_name": f"Bulk{value}{i}",
"last_name": f"User{value}{i}",
"email": f"bulk{value}{i}@example.com",
},
)
queries.append(q)
logger.info(f"Bulk inserting {count} users")
result = await db_ops.execute_many(queries)
return {"result": result}
@app.get("/database/error-example", tags=["Database Examples"])
async def error_example():
"""
Trigger an error to demonstrate error handling.
Example:
GET /database/error-example
Returns:
Error details from a failed query.
"""
# Try to select from a non-existent table
from sqlalchemy import text
query = text("SELECT * FROM non_existent_table")
logger.info("Triggering error example")
result = await db_ops.read_query(query)
return {"result": result}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="127.0.0.1", port=5001)
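The dynamic filter construction used by `get_users_by_name` and `get_users_or` (collect one condition per provided value, then combine them with AND or OR) is a general pattern, not specific to SQLAlchemy. A minimal standard-library sketch of the same accumulate-then-combine idea, using plain predicate functions over hypothetical record dicts instead of column expressions:

```python
# Sketch of the accumulate-then-combine filter pattern from the
# get_users_by_name / get_users_or endpoints, using plain predicates
# on dicts instead of SQLAlchemy expressions (hypothetical data).

def build_filters(first_name: str = "", last_name: str = ""):
    """Collect one predicate per provided value, mirroring the
    filters.append(...) calls in the endpoints."""
    filters = []
    if first_name:
        filters.append(lambda u: u["first_name"] == first_name)
    if last_name:
        filters.append(lambda u: u["last_name"] == last_name)
    return filters

def match_and(user, filters):
    # Like and_(*filters): every condition must hold; an empty
    # filter list matches everything (all([]) is True).
    return all(f(user) for f in filters)

def match_or(user, filters):
    # Like or_(*filters): any one condition may hold.
    return any(f(user) for f in filters)

users = [
    {"first_name": "Alice", "last_name": "Smith"},
    {"first_name": "Alice", "last_name": "Jones"},
    {"first_name": "Bob", "last_name": "Smith"},
]

f = build_filters(first_name="Alice", last_name="Smith")
and_matches = [u for u in users if match_and(u, f)]
or_matches = [u for u in users if match_or(u, f)]
```

As in the endpoints, the empty-filter case needs a deliberate choice: the AND combination naturally matches everything, while the OR combination would match nothing, which is why the endpoints fall back to a bare `Select(User)` when no filters are given.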
csv_example_with_timer Example¶
CSV Example with Timer¶
This module demonstrates how to generate and save CSV files at regular intervals using Python. It includes functionality to create sample data, save it to a CSV file, and repeat the process indefinitely with a specified delay.
Features¶
- Dynamic Data Generation: The `create_sample_list` function generates a list of lists with a customizable number of rows. Each row contains sample data with predefined headers.
- Automated File Saving: The `save_data_with_timer` function saves the generated data to a CSV file every 5 seconds. Each file is uniquely named with a timestamp to avoid overwriting.
- Customizable CSV Format: The CSV files are saved with a pipe (`|`) as the delimiter and double quotes (`"`) as the quote character.
- Logging Support: The module uses a logging configuration to provide debug-level logging for better traceability.
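The unique, timestamp-based file naming described above can be sketched with the standard library alone (the `data_` prefix and `.csv` extension here mirror the names produced by `save_data_with_timer`; the helper itself is illustrative, not part of `dsg_lib`):

```python
from datetime import datetime

def timestamped_name(prefix: str = "data", ext: str = "csv") -> str:
    """Build a unique file name such as data_20250417_154700.csv,
    matching the naming scheme used by save_data_with_timer."""
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{prefix}_{timestamp}.{ext}"

name = timestamped_name()
```

Because the timestamp resolves to the second, files generated at least a second apart never collide, which is sufficient for the 5-second save interval used in this example.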
Use Case¶
This module is ideal for scenarios where continuous data generation and saving are required, such as testing, simulations, or data pipeline prototyping.
Directory Structure¶
The generated CSV files are saved in the following directory:
/workspaces/devsetgo_lib/data/move/source
How to Run¶
To execute the script, simply run it as a standalone program:
python csv_example_with_timer.py
The script will continuously generate and save CSV files until manually stopped.
Dependencies¶
- `dsg_lib.common_functions.file_functions.save_csv`: A utility function to save data to a CSV file.
- `dsg_lib.common_functions.logging_config.config_log`: A utility function to configure logging.
License¶
This module is licensed under the MIT License.
import random
import time
from datetime import datetime
from dsg_lib.common_functions.file_functions import save_csv
from dsg_lib.common_functions.logging_config import config_log
config_log(logging_level="DEBUG")
example_list = [
["thing_one", "thing_two"],
["a", "b"],
["c", "d"],
["e", "f"],
["g", "h"],
]
def create_sample_list(qty=10):
"""
Create a sample list of lists with specified quantity.
"""
headers = ["thing_one", "thing_two", "thing_three", "thing_four", "thing_five"]
sample_list = [headers]
for i in range(qty):
sample_list.append(
[f"item_{i+1}", f"item_{i+2}", f"item_{i+3}", f"item_{i+4}", f"item_{i+5}"]
)
return sample_list
def save_data_with_timer():
"""
Saves a new CSV file every 5 seconds with a unique timestamped name.
This function generates a sample list of data with a random number of rows
(between 10 and 100,000) using the `create_sample_list` function. It then
saves this data to a CSV file in the specified directory. The file name
includes a timestamp to ensure uniqueness. The CSV file is saved with a
pipe (`|`) as the delimiter and double quotes (`"`) as the quote character.
The process repeats indefinitely, with a 5-second delay between each file
creation. This function is useful for testing or simulating scenarios where
data is continuously generated and saved to disk.
The saved files are stored in the `/workspaces/devsetgo_lib/data/move/source`
directory.
"""
while True:
example_list = create_sample_list(qty=random.randint(10, 100000))
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
file_name = f"data_{timestamp}.csv"
save_csv(
file_name=file_name,
data=example_list,
root_folder="/workspaces/devsetgo_lib/data/move/source",
delimiter="|",
quotechar='"',
)
print(f"Saved file: {file_name}")
time.sleep(5)
if __name__ == "__main__":
save_data_with_timer()
csv_example Example¶
CSV Example Module¶
This module provides examples of how to work with CSV files using the `dsg_lib` library. It includes functions for saving data to a CSV file, opening and reading data from a CSV file, appending data to an existing CSV file, deleting a CSV file, and creating sample files for testing purposes. The module is designed to demonstrate the usage of the `file_functions` and `logging_config` utilities provided by `dsg_lib`.
Functions¶
`save_some_data(example_list: list)`¶
Saves a list of data to a CSV file. The function uses the `save_csv` utility from `dsg_lib` to write the data to a file. The file is saved with a specified delimiter and quote character.
- Parameters:
  - `example_list` (list): A list of lists containing the data to be saved.
- Notes:
  - The file is saved in the `/data` directory with the name `your-file-name.csv`.
  - The delimiter used is `|`, and the quote character is `"`.
  - Refer to the `save_csv` documentation for additional options.
`open_some_data(the_file_name: str) -> List[Dict[str, Any]]`¶
Opens a CSV file and returns its contents as a list of dictionaries, one per row. This function assumes the CSV file has a header row and uses the `open_csv` utility from `dsg_lib`.
- Parameters:
  - `the_file_name` (str): The name of the CSV file to open.
- Returns:
  - `List[Dict[str, Any]]`: The CSV file's rows as dictionaries keyed by the header fields.
- Notes:
  - Additional options such as delimiter, quote level, and space handling can be configured.
  - Refer to the Python CSV documentation for more details: Python CSV Documentation.
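Reading a header-row CSV back into dictionaries, as `open_some_data` does via `open_csv`, can be sketched with the standard library's `csv.DictReader`. This shows the stdlib behavior only; `open_csv` may differ in its options and defaults:

```python
import csv
import io

# Parse pipe-delimited text with a header row into a list of dicts,
# one dict per data row, keyed by the header fields.
text = "thing_one|thing_two\r\na|b\r\nc|d\r\n"
reader = csv.DictReader(io.StringIO(text), delimiter="|", quotechar='"')
records = [dict(row) for row in reader]
```

`DictReader` consumes the first row as field names automatically, so only the data rows appear in `records`.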
`append_some_data(rows: list)`¶
Appends rows to an existing CSV file. The function uses the `append_csv` utility from `dsg_lib`.
- Parameters:
  - `rows` (list): A list of lists containing the rows to append. The header must match the existing file.
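The header-match requirement for appends can be checked up front before writing anything. A small sketch of such a check (a hypothetical helper for illustration, not part of `dsg_lib`):

```python
def header_matches(existing_rows, new_rows):
    """Return True when the first row (the header) of both datasets
    agrees, which is the precondition append_some_data documents."""
    return bool(existing_rows) and bool(new_rows) and existing_rows[0] == new_rows[0]

existing = [["thing_one", "thing_two"], ["a", "b"]]
to_append = [["thing_one", "thing_two"], ["i", "j"]]
ok = header_matches(existing, to_append)
```

Validating the header before appending avoids silently producing a file whose later rows no longer line up with the column names.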
`delete_example_file(file_name: str)`¶
Deletes a CSV file. The function uses the `delete_file` utility from `dsg_lib`.
- Parameters:
  - `file_name` (str): The name of the file to delete.
`sample_files()`¶
Creates sample files for testing purposes. This function uses the `create_sample_files` utility from `dsg_lib`.
- Notes:
  - The sample file is named `test_sample` and contains 1000 rows of data.
Example Usage¶
if __name__ == "__main__":
# Save example data to a CSV file
save_some_data(example_list)
# Open and read data from a CSV file
opened_file = open_some_data("your-file-name.csv")
print("Opened CSV data:", opened_file)
# Append data to an existing CSV file
rows_to_append = [
["thing_one", "thing_two"], # header row (must match)
["i", "j"],
["k", "l"],
]
append_some_data(rows_to_append)
# Delete the CSV file
delete_example_file("your-file-name.csv")
# Create sample files for testing
sample_files()
Logging¶
The module configures logging using the `config_log` utility from `dsg_lib`. The logging level is set to `DEBUG` to provide detailed information during execution.
License¶
This module is licensed under the MIT License.
from typing import List, Dict, Any
from dsg_lib.common_functions.file_functions import create_sample_files, open_csv, save_csv
from dsg_lib.common_functions.logging_config import config_log
config_log(logging_level="DEBUG")
example_list = [
["thing_one", "thing_two"],
["a", "b"],
["c", "d"],
["e", "f"],
["g", "h"],
]
def save_some_data(example_list: List[List[str]]) -> None:
"""
Save a list of lists to a CSV file using dsg_lib's save_csv.
Args:
example_list (List[List[str]]): Data to save, including header as first row.
"""
# Save data to CSV with custom delimiter and quote character
save_csv(
file_name="your-file-name.csv",
data=example_list,
root_folder="/data",
delimiter="|",
quotechar='"',
)
def open_some_data(the_file_name: str) -> List[Dict[str, Any]]:
"""
Open a CSV file and return its contents as a list of dictionaries.
Args:
the_file_name (str): Name of the CSV file to open.
Returns:
List[Dict[str, Any]]: List of rows as dictionaries.
"""
result = open_csv(file_name=the_file_name)
return result
def append_some_data(rows: List[List[str]]) -> None:
"""
Append rows to an existing CSV file.
Args:
rows (List[List[str]]): Rows to append, header must match existing file.
"""
from dsg_lib.common_functions.file_functions import append_csv
append_csv(
file_name="your-file-name.csv",
data=rows,
root_folder="/data",
delimiter="|",
quotechar='"',
)
def delete_example_file(file_name: str) -> None:
"""
Delete a CSV file using dsg_lib's delete_file.
Args:
file_name (str): Name of the file to delete.
"""
from dsg_lib.common_functions.file_functions import delete_file
delete_file(file_name)
def sample_files() -> None:
"""
Create sample files for testing.
"""
filename = "test_sample"
samplesize = 1000
create_sample_files(filename, samplesize)
if __name__ == "__main__":
# Example: Save data to CSV
save_some_data(example_list)
# Example: Open and read data from CSV
opened_file = open_some_data("your-file-name.csv")
print("Opened CSV data:", opened_file)
# Example: Append data to CSV (header must match)
rows_to_append = [
["thing_one", "thing_two"], # header row (must match)
["i", "j"],
["k", "l"],
]
append_some_data(rows_to_append)
# Example: Delete the CSV file
delete_example_file("your-file-name.csv")
# Example: Create sample files
sample_files()
About
About¶
DevSetGo Library is a collection of tools and utilities to help developers with their projects. It has been primarily developed to help with the development of the DevSetGo.com website and other projects in my personal and professional life. It is a work in progress and will be updated as needed.
The driving force behind the library is to limit the amount of boilerplate code that I (Mike Ryan) have to write for each project. Copying code from one project to another causes issues to get fixed in one project, but not always get updated in others. This library is an attempt to fix that issue and make it easier to maintain code across multiple projects.
The library is written in Python and is available on PyPI. It is open source and available on GitHub. Feel free to use it in your projects and contribute to the library.
About Me¶
I am a software engineering manager with an eclectic background in various industries (finance, manufacturing, and metrology). I am passionate about software development and love to learn new things.
Contributing¶
Please feel free to contribute to this project. Adding common functions is the intent of the library, so contributions that add a new function or improve an existing one are greatly appreciated.
Ways to Contribute!¶
- Add or improve a function
- Add or improve documentation
- Add or improve tests
- Report or fix a bug
Changelog¶
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog
Latest Changes¶
Improving SQLAlchemy Queries and Improved Documentation (v2025.4.17.1)¶
Read the Full Documentation here.
What's Changed¶
- fixing sonar issue (#494) @devsetgo
- updating release drafter (#493) @devsetgo
- Improving Examples (#492) @devsetgo
- Fix of issue with distinct queries in read queries (#491) @devsetgo
- updating footer (#490) @devsetgo
- adding raw link to release drafter (#489) @devsetgo
- Improving documentation (#488) @devsetgo
- adding 3.13 testing (#486) @devsetgo
The library is built to help reduce copy/paste across multiple projects and uses calendar versioning (year.month.day.build) via BumpCalver.
Published Date: 2025 April 17, 15:47
New Functions and Updated Documentation (v2025.4.5.1)¶
What's Changed¶
- Fix of Calendar Version Pattern (#485) @devsetgo
- Working on Improving Documentation and Tests (#474) @devsetgo
- pip(deps): bump pytest-asyncio from 0.25.3 to 0.26.0 (#480) @dependabot[bot]
- pip(deps): bump mkdocstrings[python,shell] from 0.27.0 to 0.29.1 (#481) @dependabot[bot]
- pip(deps): bump black from 24.10.0 to 25.1.0 (#482) @dependabot[bot]
- pip(deps): bump ruff from 0.9.9 to 0.11.2 (#483) @dependabot[bot]
- pip(deps): bump pre-commit from 4.0.1 to 4.2.0 (#484) @dependabot[bot]
- pip(deps): bump pytest-asyncio from 0.25.0 to 0.25.3 (#475) @dependabot[bot]
- pip(deps): bump ruff from 0.9.4 to 0.9.9 (#476) @dependabot[bot]
- pip(deps): bump flake8 from 7.1.1 to 7.1.2 (#477) @dependabot[bot]
- pip(deps): bump twine from 6.0.1 to 6.1.0 (#478) @dependabot[bot]
- pip(deps): bump structlog from 24.4.0 to 25.1.0 (#479) @dependabot[bot]
- pip(deps): bump pymdown-extensions from 10.13 to 10.14.3 (#473) @dependabot[bot]
- pip(deps): bump python-json-logger from 2.0.7 to 3.2.1 (#469) @dependabot[bot]
- pip(deps): bump ruff from 0.8.1 to 0.9.4 (#470) @dependabot[bot]
- pip(deps): bump pytest from 8.3.3 to 8.3.4 (#471) @dependabot[bot]
- pip(deps): bump pygments from 2.18.0 to 2.19.1 (#472) @dependabot[bot]
- pip(deps): bump loguru from 0.7.2 to 0.7.3 (#463) @dependabot[bot]
- pip(deps): bump hatchling from 1.26.3 to 1.27.0 (#464) @dependabot[bot]
- pip(deps): bump bumpcalver from 2024.11.8 to 2024.12.14.1 (#466) @dependabot[bot]
- pip(deps): bump pymdown-extensions from 10.12 to 10.13 (#465) @dependabot[bot]
- pip(deps): bump pytest-asyncio from 0.24.0 to 0.25.0 (#467) @dependabot[bot]
- pip(deps): bump mkdocs-material from 9.5.46 to 9.5.47 (#460) @dependabot[bot]
- pip(deps): bump twine from 5.1.1 to 6.0.1 (#461) @dependabot[bot]
- pip(deps): bump ruff from 0.8.0 to 0.8.1 (#462) @dependabot[bot]
Published Date: 2025 April 05, 20:56
Adding new db functions (v2024.11.28.1)¶
What's Changed¶
- Adding new general execute queries and adding deprecation (#459) @devsetgo
- pip(deps): bump tox from 4.23.0 to 4.23.2 (#455) @dependabot
- pip(deps): bump fastapi[all] from 0.115.2 to 0.115.4 (#454) @dependabot
- pip(deps): bump tqdm from 4.66.5 to 4.66.6 (#456) @dependabot
- pip(deps): bump pymdown-extensions from 10.11.2 to 10.12 (#457) @dependabot
- pip(deps): bump ruff from 0.7.0 to 0.7.1 (#458) @dependabot
Published Date: 2024 November 28, 22:01
Moving to Calendar Versioning (2024.10.20.1)¶
What's Changed¶
- moving to calendar versioning (#453) @devsetgo
- pip(deps): bump tox from 4.21.0 to 4.23.0 (#452) @dependabot
- pip(deps): bump fastapi[all] from 0.114.2 to 0.115.0 (#451) @dependabot
- pip(deps): bump tox from 4.18.1 to 4.21.0 (#450) @dependabot
- pip(deps): bump watchdog from 5.0.2 to 5.0.3 (#449) @dependabot
- pip(deps): bump pylint from 3.2.7 to 3.3.1 (#448) @dependabot
- pip(deps): bump ruff from 0.6.5 to 0.6.8 (#447) @dependabot
Published Date: 2024 October 20, 16:30
Complete Replacement of CX-Oracle for OracleDB (v0.14.4)¶
What's Changed¶
- Remove CX-Oracle for OracleDB cleanup (#446) @devsetgo
- pip(deps): bump pylint from 3.2.6 to 3.2.7 (#442) @dependabot
- pip(deps): bump mkdocs-material from 9.5.33 to 9.5.34 (#443) @dependabot
- github actionts(deps): bump actions/checkout from 2 to 4 (#444) @dependabot
- github actionts(deps): bump actions/setup-python from 2 to 5 (#445) @dependabot
Published Date: 2024 September 15, 15:28
Standard Logging Suppression by Default (v0.14.3)¶
What's Changed¶
- Limit Standard Logging being Displayed (#441) @devsetgo
Published Date: 2024 August 31, 17:33
Improvements and fixes (v0.14.2)¶
What's Changed¶
- Improvements and fixes (#440) @devsetgo
Breaking changes¶
- save_text function no longer adds .txt by default.
- Change from cx-oracle to oracledb
- Improvements to documentation
Published Date: 2024 August 31, 00:02
Adding DB Disconnect (v0.14.1)¶
What's Changed¶
- Adding Database Disconnect (#439) @devsetgo
- pip(deps): bump pre-commit from 3.7.1 to 3.8.0 (#434) @dependabot
- updates to deal with stashing pages (#437) @devsetgo
- working on issue for deployment (#436) @devsetgo
- Adding MKDocs Workflow (#435) @devsetgo
- Version 0.14.0 (#433) @devsetgo
Published Date: 2024 August 25, 18:47
Fix of version for Pypi (v0.14.0-a)¶
What's Changed¶
- Version 0.14.0 (#433) @devsetgo
Published Date: 2024 July 27, 22:40
High Speed Multi-Processing Improvements (v0.14.0)¶
What's Changed¶
- High Speed Logging for Loguru Multi-Processing (#432) @devsetgo
- Resilient Sink Fixes (#431) @devsetgo
- Fix of bug in resilient sink (#430) @devsetgo
- Adding Resiliency to Logging Config (#429) @devsetgo
- pip(deps): bump mkdocs-print-site-plugin from 2.4.1 to 2.5.0 (#422) @dependabot
- pip(deps): bump ruff from 0.4.5 to 0.4.7 (#420) @dependabot
- pip(deps): bump autopep8 from 2.1.1 to 2.2.0 (#421) @dependabot
- pip(deps): bump mkdocs-material from 9.5.24 to 9.5.25 (#423) @dependabot
Published Date: 2024 July 27, 22:28
(v0.13.0-republish)¶
What's Changed¶
Republishing v0.13.0 for pypi.
Published Date: 2024 May 26, 17:13
(v0.13.0)¶
What's Changed¶
- Breaking Change: Removing Limit and Offset from read queries (#419) @devsetgo
Published Date: 2024 May 26, 15:44
Adding missing requirement (v0.12.4)¶
What's Changed¶
- adding missing requirement (#417) @devsetgo
Published Date: 2024 May 16, 14:40
Adding Email Validation (v0.12.3)¶
What's Changed¶
- bump to 0.12.3 (#416) @devsetgo
- Add email validation capabilities (#415) @devsetgo
- pip(deps): bump mkdocs-material from 9.5.20 to 9.5.21 (#414) @dependabot
- pip(deps): bump ruff from 0.4.2 to 0.4.4 (#413) @dependabot
- pip(deps): bump coverage-badge from 1.1.0 to 1.1.1 (#409) @dependabot
- pip(deps): bump mkdocs-material from 9.5.18 to 9.5.20 (#408) @dependabot
- pip(deps): bump mkdocstrings[python,shell] from 0.24.3 to 0.25.0 (#407) @dependabot
- pip(deps): bump ruff from 0.4.1 to 0.4.2 (#410) @dependabot
- pip(deps): bump tqdm from 4.66.2 to 4.66.3 (#412) @dependabot
Published Date: 2024 May 16, 14:19
logging changes (v0.12.2)¶
What's Changed¶
- Logging Changes, Documentation Updates, Using Ruff (#406) @devsetgo
Published Date: 2024 April 22, 16:17
Updates for MetaData and All HTTP Codes (v0.12.1)¶
What's Changed¶
- Enhancements and fixes (#405) @devsetgo
Published Date: 2024 April 19, 18:52
Breaking Change: Base Schema per Database Type (v0.12.0)¶
What's Changed¶
- Adding new base schema for database types (#402) @devsetgo
- creating main release for 0.11.2 (#390) @devsetgo
- Working on bug in read_query (#389) @devsetgo
- Reorganizing Library Structure (#388) @devsetgo
- Python Build and Publish fix (#382) @devsetgo
Breaking Change: SchemaBase is now SchemaBaseSQLite
Published Date: 2024 April 13, 22:55
Reorganizing Library Structure (v0.11.2-main)¶
What's Changed¶
- creating main release for 0.11.2 (#390) @devsetgo
- Working on bug in read_query (#389) @devsetgo
- Reorganizing Library Structure (#388) @devsetgo
- Python Build and Publish fix (#382) @devsetgo
Breaking Changes are included in this release for import paths. See documents for more information.
Published Date: 2024 February 17, 19:09
Read Query Fix Beta Testing (v0.11.2-beta1)¶
What's Changed¶
- Working on bug in read_query (#389) @devsetgo
- Reorganizing Library Structure (#388) @devsetgo
- Python Build and Publish fix (#382) @devsetgo
Published Date: 2024 February 16, 22:01
Pre-Release to test new structure and publishing (v0.11.2-beta)¶
What's Changed¶
- Reorganizing Library Structure (#388) @devsetgo
- Python Build and Publish fix (#382) @devsetgo
Published Date: 2024 February 10, 21:16
(v0.11.2-fix2)¶
What's Changed¶
- Python Build and Publish fix (#382) @devsetgo
Published Date: 2024 January 21, 15:01
Adding Delete Many and minor fixes (v0.11.2)¶
What's Changed¶
- Adding Delete Many and Other Updates (#381) @devsetgo
- pip(deps): bump mkdocs-material from 9.5.2 to 9.5.3 (#377) @dependabot
- pip(deps): bump fastapi[all] from 0.105.0 to 0.108.0 (#375) @dependabot
- pip(deps): bump sqlalchemy from 2.0.23 to 2.0.24 (#374) @dependabot
- pip(deps): bump pytest from 7.4.3 to 7.4.4 (#373) @dependabot
- pip(deps): bump black from 23.12.0 to 23.12.1 (#376) @dependabot
- github actionts(deps): bump actions/setup-python from 4 to 5 (#378) @dependabot
Published Date: 2024 January 20, 00:07
Breaking Change (v0.11.1)¶
What's Changed¶
- Bump of Version to 0.11.1 (#371) @devsetgo
- Query Improvement (#370) @devsetgo
- 368 get one record should return an empty value when called (#369) @devsetgo
- updating docs from v0.11.0 release (#367) @devsetgo
Published Date: 2023 December 23, 10:49
Full Release of New Features (v0.11.0)¶
What's Changed¶
- Prep for Release (#366) @devsetgo
- Fixing sonar settings (#365) @devsetgo
- Fixes and improvements (#364) @devsetgo
- Dev (#362) @devsetgo
- Fix of issues from Beta release (#361) @devsetgo
- 359 tables are created before create tables is called (#360) @devsetgo
- Change Log (#358) @devsetgo
- fixing latest-changes (#357) @devsetgo
- removing jinja template from Latest Changes Action (#356) @devsetgo
- Action fixing adding main (#355) @devsetgo
- Fixing actions (#354) @devsetgo
- Fixing Beta Publishing issues and Documentation Improvements (#353) @devsetgo
- Update setup.py for sub packages (#352) @devsetgo
- Import Bug Fix (#351) @devsetgo
- Latest Changes Action Fix (#350) @devsetgo
- Next Release (#349) @devsetgo
- Dev (#348) @devsetgo
- pip(deps): bump autopep8 from 2.0.2 to 2.0.4 (#343) @dependabot
- pip(deps): bump wheel from 0.41.2 to 0.42.0 (#345) @dependabot
- pip(deps): bump mkdocstrings[python] from 0.21.2 to 0.24.0 (#346) @dependabot
- pip(deps): bump mkdocs from 1.4.3 to 1.5.3 (#347) @dependabot
- pip(deps): bump flake8 from 6.0.0 to 6.1.0 (#332) @dependabot
- pip(deps): bump click from 8.1.3 to 8.1.7 (#337) @dependabot
- pip(deps): bump wheel from 0.40.0 to 0.41.2 (#339) @dependabot
- github actionts(deps): bump actions/checkout from 2 to 4 (#340) @dependabot
- pip(deps): bump mkdocs-material from 9.1.17 to 9.4.2 (#341) @dependabot
- pip(deps): bump black from 23.3.0 to 23.9.1 (#342) @dependabot
- pip(deps): bump mkdocs-material from 9.1.15 to 9.1.17 (#326) @dependabot
- pip(deps): bump pytest from 7.3.1 to 7.4.0 (#327) @dependabot
- pip(deps): bump mkdocs from 1.4.2 to 1.4.3 (#328) @dependabot
- pip(deps): bump autoflake from 2.1.1 to 2.2.0 (#329) @dependabot
- pip(deps): bump pre-commit from 3.2.2 to 3.3.3 (#330) @dependabot
- pip(deps): bump mkdocs-material from 9.1.9 to 9.1.15 (#325) @dependabot
- pip(deps): bump autoflake from 2.0.2 to 2.1.1 (#324) @dependabot
- pip(deps): bump pytest-xdist from 3.2.1 to 3.3.1 (#323) @dependabot
- pip(deps): bump tox from 4.4.11 to 4.5.2 (#322) @dependabot
- pip(deps): bump pytest-cov from 4.0.0 to 4.1.0 (#321) @dependabot
- pip(deps): bump loguru from 0.6.0 to 0.7.0 (#317) @dependabot
- pip(deps): bump mkdocs-gen-files from 0.4.0 to 0.5.0 (#314) @dependabot
- pip(deps): bump pylint from 2.17.2 to 2.17.4 (#319) @dependabot
- pip(deps): bump mkdocs-material from 9.1.6 to 9.1.9 (#320) @dependabot
- pip(deps): bump pytest from 7.3.0 to 7.3.1 (#318) @dependabot
Published Date: 2023 December 17, 22:00
Beta Release with fixes for multiple issues (v0.11.0-beta3-fix1)¶
What's Changed¶
- Dev (#362) @devsetgo
- Fix of issues from Beta release (#361) @devsetgo
- 359 tables are created before create tables is called (#360) @devsetgo
- Change Log (#358) @devsetgo
- fixing latest-changes (#357) @devsetgo
- removing jinja template from Latest Changes Action (#356) @devsetgo
- Action fixing adding main (#355) @devsetgo
- Fixing actions (#354) @devsetgo
- Fixing Beta Publishing issues and Documentation Improvements (#353) @devsetgo
- Update setup.py for sub packages (#352) @devsetgo
- Import Bug Fix (#351) @devsetgo
- Latest Changes Action Fix (#350) @devsetgo
- Next Release (#349) @devsetgo
- Dev (#348) @devsetgo
- pip(deps): bump autopep8 from 2.0.2 to 2.0.4 (#343) @dependabot
- pip(deps): bump wheel from 0.41.2 to 0.42.0 (#345) @dependabot
- pip(deps): bump mkdocstrings[python] from 0.21.2 to 0.24.0 (#346) @dependabot
- pip(deps): bump mkdocs from 1.4.3 to 1.5.3 (#347) @dependabot
- pip(deps): bump flake8 from 6.0.0 to 6.1.0 (#332) @dependabot
- pip(deps): bump click from 8.1.3 to 8.1.7 (#337) @dependabot
- pip(deps): bump wheel from 0.40.0 to 0.41.2 (#339) @dependabot
- github actionts(deps): bump actions/checkout from 2 to 4 (#340) @dependabot
- pip(deps): bump mkdocs-material from 9.1.17 to 9.4.2 (#341) @dependabot
- pip(deps): bump black from 23.3.0 to 23.9.1 (#342) @dependabot
- pip(deps): bump mkdocs-material from 9.1.15 to 9.1.17 (#326) @dependabot
- pip(deps): bump pytest from 7.3.1 to 7.4.0 (#327) @dependabot
- pip(deps): bump mkdocs from 1.4.2 to 1.4.3 (#328) @dependabot
- pip(deps): bump autoflake from 2.1.1 to 2.2.0 (#329) @dependabot
- pip(deps): bump pre-commit from 3.2.2 to 3.3.3 (#330) @dependabot
- pip(deps): bump mkdocs-material from 9.1.9 to 9.1.15 (#325) @dependabot
- pip(deps): bump autoflake from 2.0.2 to 2.1.1 (#324) @dependabot
- pip(deps): bump pytest-xdist from 3.2.1 to 3.3.1 (#323) @dependabot
- pip(deps): bump tox from 4.4.11 to 4.5.2 (#322) @dependabot
- pip(deps): bump pytest-cov from 4.0.0 to 4.1.0 (#321) @dependabot
- pip(deps): bump loguru from 0.6.0 to 0.7.0 (#317) @dependabot
- pip(deps): bump mkdocs-gen-files from 0.4.0 to 0.5.0 (#314) @dependabot
- pip(deps): bump pylint from 2.17.2 to 2.17.4 (#319) @dependabot
- pip(deps): bump mkdocs-material from 9.1.6 to 9.1.9 (#320) @dependabot
- pip(deps): bump pytest from 7.3.0 to 7.3.1 (#318) @dependabot
Published Date: 2023 December 17, 16:23
Fixing AsyncDatabase create tables (v0.11.0-beta3)¶
What's Changed¶
- Fix of issues from Beta release (#361) @devsetgo
- 359 tables are created before create tables is called (#360) @devsetgo
- Change Log (#358) @devsetgo
- fixing latest-changes (#357) @devsetgo
- removing jinja template from Latest Changes Action (#356) @devsetgo
- Action fixing adding main (#355) @devsetgo
- Fixing actions (#354) @devsetgo
- Fixing Beta Publishing issues and Documentation Improvements (#353) @devsetgo
- Update setup.py for sub packages (#352) @devsetgo
- Import Bug Fix (#351) @devsetgo
- Latest Changes Action Fix (#350) @devsetgo
- Next Release (#349) @devsetgo
- Dev (#348) @devsetgo
- pip(deps): bump autopep8 from 2.0.2 to 2.0.4 (#343) @dependabot
- pip(deps): bump wheel from 0.41.2 to 0.42.0 (#345) @dependabot
- pip(deps): bump mkdocstrings[python] from 0.21.2 to 0.24.0 (#346) @dependabot
- pip(deps): bump mkdocs from 1.4.3 to 1.5.3 (#347) @dependabot
- pip(deps): bump flake8 from 6.0.0 to 6.1.0 (#332) @dependabot
- pip(deps): bump click from 8.1.3 to 8.1.7 (#337) @dependabot
- pip(deps): bump wheel from 0.40.0 to 0.41.2 (#339) @dependabot
- github actions(deps): bump actions/checkout from 2 to 4 (#340) @dependabot
- pip(deps): bump mkdocs-material from 9.1.17 to 9.4.2 (#341) @dependabot
- pip(deps): bump black from 23.3.0 to 23.9.1 (#342) @dependabot
- pip(deps): bump mkdocs-material from 9.1.15 to 9.1.17 (#326) @dependabot
- pip(deps): bump pytest from 7.3.1 to 7.4.0 (#327) @dependabot
- pip(deps): bump mkdocs from 1.4.2 to 1.4.3 (#328) @dependabot
- pip(deps): bump autoflake from 2.1.1 to 2.2.0 (#329) @dependabot
- pip(deps): bump pre-commit from 3.2.2 to 3.3.3 (#330) @dependabot
- pip(deps): bump mkdocs-material from 9.1.9 to 9.1.15 (#325) @dependabot
- pip(deps): bump autoflake from 2.0.2 to 2.1.1 (#324) @dependabot
- pip(deps): bump pytest-xdist from 3.2.1 to 3.3.1 (#323) @dependabot
- pip(deps): bump tox from 4.4.11 to 4.5.2 (#322) @dependabot
- pip(deps): bump pytest-cov from 4.0.0 to 4.1.0 (#321) @dependabot
- pip(deps): bump loguru from 0.6.0 to 0.7.0 (#317) @dependabot
- pip(deps): bump mkdocs-gen-files from 0.4.0 to 0.5.0 (#314) @dependabot
- pip(deps): bump pylint from 2.17.2 to 2.17.4 (#319) @dependabot
- pip(deps): bump mkdocs-material from 9.1.6 to 9.1.9 (#320) @dependabot
- pip(deps): bump pytest from 7.3.0 to 7.3.1 (#318) @dependabot
Published Date: 2023 December 17, 16:18
Build Updates (v0.11.0-beta2)¶
What's Changed¶
- Change Log (#358) @devsetgo
- fixing latest-changes (#357) @devsetgo
- removing jinja template from Latest Changes Action (#356) @devsetgo
- Action fixing adding main (#355) @devsetgo
- Fixing actions (#354) @devsetgo
- Fixing Beta Publishing issues and Documentation Improvements (#353) @devsetgo
- Update setup.py for sub packages (#352) @devsetgo
- Import Bug Fix (#351) @devsetgo
- Latest Changes Action Fix (#350) @devsetgo
- Next Release (#349) @devsetgo
- Dev (#348) @devsetgo
- pip(deps): bump autopep8 from 2.0.2 to 2.0.4 (#343) @dependabot
- pip(deps): bump wheel from 0.41.2 to 0.42.0 (#345) @dependabot
- pip(deps): bump mkdocstrings[python] from 0.21.2 to 0.24.0 (#346) @dependabot
- pip(deps): bump mkdocs from 1.4.3 to 1.5.3 (#347) @dependabot
- pip(deps): bump flake8 from 6.0.0 to 6.1.0 (#332) @dependabot
- pip(deps): bump click from 8.1.3 to 8.1.7 (#337) @dependabot
- pip(deps): bump wheel from 0.40.0 to 0.41.2 (#339) @dependabot
- github actions(deps): bump actions/checkout from 2 to 4 (#340) @dependabot
- pip(deps): bump mkdocs-material from 9.1.17 to 9.4.2 (#341) @dependabot
- pip(deps): bump black from 23.3.0 to 23.9.1 (#342) @dependabot
- pip(deps): bump mkdocs-material from 9.1.15 to 9.1.17 (#326) @dependabot
- pip(deps): bump pytest from 7.3.1 to 7.4.0 (#327) @dependabot
- pip(deps): bump mkdocs from 1.4.2 to 1.4.3 (#328) @dependabot
- pip(deps): bump autoflake from 2.1.1 to 2.2.0 (#329) @dependabot
- pip(deps): bump pre-commit from 3.2.2 to 3.3.3 (#330) @dependabot
- pip(deps): bump mkdocs-material from 9.1.9 to 9.1.15 (#325) @dependabot
- pip(deps): bump autoflake from 2.0.2 to 2.1.1 (#324) @dependabot
- pip(deps): bump pytest-xdist from 3.2.1 to 3.3.1 (#323) @dependabot
- pip(deps): bump tox from 4.4.11 to 4.5.2 (#322) @dependabot
- pip(deps): bump pytest-cov from 4.0.0 to 4.1.0 (#321) @dependabot
- pip(deps): bump loguru from 0.6.0 to 0.7.0 (#317) @dependabot
- pip(deps): bump mkdocs-gen-files from 0.4.0 to 0.5.0 (#314) @dependabot
- pip(deps): bump pylint from 2.17.2 to 2.17.4 (#319) @dependabot
- pip(deps): bump mkdocs-material from 9.1.6 to 9.1.9 (#320) @dependabot
- pip(deps): bump pytest from 7.3.0 to 7.3.1 (#318) @dependabot
Published Date: 2023 December 16, 20:34
Beta Release with fixes for multiple issues (v0.11.0-beta1-fix5)¶
What's Changed¶
- fixing latest-changes (#357) @devsetgo
- removing jinja template from Latest Changes Action (#356) @devsetgo
- Action fixing adding main (#355) @devsetgo
- Fixing actions (#354) @devsetgo
- Fixing Beta Publishing issues and Documentation Improvements (#353) @devsetgo
- Update setup.py for sub packages (#352) @devsetgo
- Import Bug Fix (#351) @devsetgo
- Latest Changes Action Fix (#350) @devsetgo
- Next Release (#349) @devsetgo
- Dev (#348) @devsetgo
- pip(deps): bump autopep8 from 2.0.2 to 2.0.4 (#343) @dependabot
- pip(deps): bump wheel from 0.41.2 to 0.42.0 (#345) @dependabot
- pip(deps): bump mkdocstrings[python] from 0.21.2 to 0.24.0 (#346) @dependabot
- pip(deps): bump mkdocs from 1.4.3 to 1.5.3 (#347) @dependabot
- pip(deps): bump flake8 from 6.0.0 to 6.1.0 (#332) @dependabot
- pip(deps): bump click from 8.1.3 to 8.1.7 (#337) @dependabot
- pip(deps): bump wheel from 0.40.0 to 0.41.2 (#339) @dependabot
- github actions(deps): bump actions/checkout from 2 to 4 (#340) @dependabot
- pip(deps): bump mkdocs-material from 9.1.17 to 9.4.2 (#341) @dependabot
- pip(deps): bump black from 23.3.0 to 23.9.1 (#342) @dependabot
- pip(deps): bump mkdocs-material from 9.1.15 to 9.1.17 (#326) @dependabot
- pip(deps): bump pytest from 7.3.1 to 7.4.0 (#327) @dependabot
- pip(deps): bump mkdocs from 1.4.2 to 1.4.3 (#328) @dependabot
- pip(deps): bump autoflake from 2.1.1 to 2.2.0 (#329) @dependabot
- pip(deps): bump pre-commit from 3.2.2 to 3.3.3 (#330) @dependabot
- pip(deps): bump mkdocs-material from 9.1.9 to 9.1.15 (#325) @dependabot
- pip(deps): bump autoflake from 2.0.2 to 2.1.1 (#324) @dependabot
- pip(deps): bump pytest-xdist from 3.2.1 to 3.3.1 (#323) @dependabot
- pip(deps): bump tox from 4.4.11 to 4.5.2 (#322) @dependabot
- pip(deps): bump pytest-cov from 4.0.0 to 4.1.0 (#321) @dependabot
- pip(deps): bump loguru from 0.6.0 to 0.7.0 (#317) @dependabot
- pip(deps): bump mkdocs-gen-files from 0.4.0 to 0.5.0 (#314) @dependabot
- pip(deps): bump pylint from 2.17.2 to 2.17.4 (#319) @dependabot
- pip(deps): bump mkdocs-material from 9.1.6 to 9.1.9 (#320) @dependabot
- pip(deps): bump pytest from 7.3.0 to 7.3.1 (#318) @dependabot
Published Date: 2023 December 16, 16:33
Build Fixes (v0.11.0-beta1-fix4)¶
What's Changed¶
- Update setup.py for sub packages (#352) @devsetgo
- Import Bug Fix (#351) @devsetgo
- Latest Changes Action Fix (#350) @devsetgo
- Next Release (#349) @devsetgo
- Dev (#348) @devsetgo
- pip(deps): bump autopep8 from 2.0.2 to 2.0.4 (#343) @dependabot
- pip(deps): bump wheel from 0.41.2 to 0.42.0 (#345) @dependabot
- pip(deps): bump mkdocstrings[python] from 0.21.2 to 0.24.0 (#346) @dependabot
- pip(deps): bump mkdocs from 1.4.3 to 1.5.3 (#347) @dependabot
- pip(deps): bump flake8 from 6.0.0 to 6.1.0 (#332) @dependabot
- pip(deps): bump click from 8.1.3 to 8.1.7 (#337) @dependabot
- pip(deps): bump wheel from 0.40.0 to 0.41.2 (#339) @dependabot
- github actions(deps): bump actions/checkout from 2 to 4 (#340) @dependabot
- pip(deps): bump mkdocs-material from 9.1.17 to 9.4.2 (#341) @dependabot
- pip(deps): bump black from 23.3.0 to 23.9.1 (#342) @dependabot
- pip(deps): bump mkdocs-material from 9.1.15 to 9.1.17 (#326) @dependabot
- pip(deps): bump pytest from 7.3.1 to 7.4.0 (#327) @dependabot
- pip(deps): bump mkdocs from 1.4.2 to 1.4.3 (#328) @dependabot
- pip(deps): bump autoflake from 2.1.1 to 2.2.0 (#329) @dependabot
- pip(deps): bump pre-commit from 3.2.2 to 3.3.3 (#330) @dependabot
- pip(deps): bump mkdocs-material from 9.1.9 to 9.1.15 (#325) @dependabot
- pip(deps): bump autoflake from 2.0.2 to 2.1.1 (#324) @dependabot
- pip(deps): bump pytest-xdist from 3.2.1 to 3.3.1 (#323) @dependabot
- pip(deps): bump tox from 4.4.11 to 4.5.2 (#322) @dependabot
- pip(deps): bump pytest-cov from 4.0.0 to 4.1.0 (#321) @dependabot
- pip(deps): bump loguru from 0.6.0 to 0.7.0 (#317) @dependabot
- pip(deps): bump mkdocs-gen-files from 0.4.0 to 0.5.0 (#314) @dependabot
- pip(deps): bump pylint from 2.17.2 to 2.17.4 (#319) @dependabot
- pip(deps): bump mkdocs-material from 9.1.6 to 9.1.9 (#320) @dependabot
- pip(deps): bump pytest from 7.3.0 to 7.3.1 (#318) @dependabot
Published Date: 2023 December 12, 11:45
Async Database and FastAPI functions (v0.11.0-beta0)¶
What's Changed¶
- Dev (#348) @devsetgo - New functionality and documentation for FastAPI Endpoints and Async Database Functionality
- pip(deps): bump autopep8 from 2.0.2 to 2.0.4 (#343) @dependabot
- pip(deps): bump wheel from 0.41.2 to 0.42.0 (#345) @dependabot
- pip(deps): bump mkdocstrings[python] from 0.21.2 to 0.24.0 (#346) @dependabot
- pip(deps): bump mkdocs from 1.4.3 to 1.5.3 (#347) @dependabot
- pip(deps): bump flake8 from 6.0.0 to 6.1.0 (#332) @dependabot
- pip(deps): bump click from 8.1.3 to 8.1.7 (#337) @dependabot
- pip(deps): bump wheel from 0.40.0 to 0.41.2 (#339) @dependabot
- github actions(deps): bump actions/checkout from 2 to 4 (#340) @dependabot
- pip(deps): bump mkdocs-material from 9.1.17 to 9.4.2 (#341) @dependabot
- pip(deps): bump black from 23.3.0 to 23.9.1 (#342) @dependabot
- pip(deps): bump mkdocs-material from 9.1.15 to 9.1.17 (#326) @dependabot
- pip(deps): bump pytest from 7.3.1 to 7.4.0 (#327) @dependabot
- pip(deps): bump mkdocs from 1.4.2 to 1.4.3 (#328) @dependabot
- pip(deps): bump autoflake from 2.1.1 to 2.2.0 (#329) @dependabot
- pip(deps): bump pre-commit from 3.2.2 to 3.3.3 (#330) @dependabot
- pip(deps): bump mkdocs-material from 9.1.9 to 9.1.15 (#325) @dependabot
- pip(deps): bump autoflake from 2.0.2 to 2.1.1 (#324) @dependabot
- pip(deps): bump pytest-xdist from 3.2.1 to 3.3.1 (#323) @dependabot
- pip(deps): bump tox from 4.4.11 to 4.5.2 (#322) @dependabot
- pip(deps): bump pytest-cov from 4.0.0 to 4.1.0 (#321) @dependabot
- pip(deps): bump loguru from 0.6.0 to 0.7.0 (#317) @dependabot
- pip(deps): bump mkdocs-gen-files from 0.4.0 to 0.5.0 (#314) @dependabot
- pip(deps): bump pylint from 2.17.2 to 2.17.4 (#319) @dependabot
- pip(deps): bump mkdocs-material from 9.1.6 to 9.1.9 (#320) @dependabot
- pip(deps): bump pytest from 7.3.0 to 7.3.1 (#318) @dependabot
Published Date: 2023 December 10, 20:17
Pattern Analysis Update and Bug Fix (v0.10.1)¶
What's Changed¶
- Improvement to the patterns analysis (#313) @devsetgo
- pip(deps): bump mkdocs-material from 9.1.3 to 9.1.5 (#308) @dependabot
- pip(deps): bump pre-commit from 3.2.0 to 3.2.1 (#310) @dependabot
- pip(deps): bump watchdog from 2.3.1 to 3.0.0 (#309) @dependabot
- pip(deps): bump pylint from 2.17.0 to 2.17.1 (#311) @dependabot
- pip(deps): bump tox from 4.4.7 to 4.4.8 (#312) @dependabot
Published Date: 2023 April 08, 21:45
ChatGPT Driven Improvements (v0.10.0)¶
ChatGPT¶
ChatGPT was used to improve tests, find bugs, and boost performance. Code coverage is at 100%, and the code base appears to perform better than before.
Major changes are in PR #304
What's Changed¶
- latest change fix for regex pattern. (#307) @devsetgo
- Dev (#306) @devsetgo
- Workflow changes (#305) @devsetgo
- ChatGPT Driven Improvements (#304) @devsetgo
- pip(deps): bump pre-commit from 3.0.2 to 3.1.1 (#300) @dependabot
- pip(deps): bump pytest-xdist from 3.1.0 to 3.2.0 (#302) @dependabot
- pip(deps): bump autoflake from 2.0.0 to 2.0.1 (#299) @dependabot
- pip(deps): bump watchdog from 2.1.9 to 2.3.1 (#301) @dependabot
- pip(deps): bump pytest from 7.2.0 to 7.2.1 (#303) @dependabot
- pip(deps): bump pylint from 2.15.7 to 2.16.1 (#298) @dependabot
- pip(deps): bump autopep8 from 2.0.0 to 2.0.1 (#289) @dependabot
- pip(deps): bump pylint from 2.15.7 to 2.15.10 (#295) @dependabot
- pip(deps): bump black from 22.10.0 to 23.1.0 (#294) @dependabot
- pip(deps): bump tox from 3.27.1 to 4.4.4 (#296) @dependabot
- pip(deps): bump pre-commit from 2.20.0 to 3.0.2 (#297) @dependabot
Published Date: 2023 April 01, 00:27
Open CSV enhancements and library updates (v0.9.0)¶
What's Changed¶
- fix of latest changes (#288) @devsetgo
- Open_CSV Enhancements (#287) @devsetgo
- pip(deps): bump pytest-cov from 3.0.0 to 4.0.0 (#274) @dependabot
- pip(deps): bump mkdocs-material from 8.4.2 to 8.5.5 (#276) @dependabot
- pip(deps): bump autoflake from 1.5.3 to 1.6.1 (#275) @dependabot
- pip(deps): bump tqdm from 4.64.0 to 4.64.1 (#273) @dependabot
- pip(deps): bump pytest from 7.1.2 to 7.1.3 (#272) @dependabot
- pip(deps): bump mkdocs from 1.3.1 to 1.4.0 (#271) @dependabot
- pip(deps): bump tox from 3.25.1 to 3.26.0 (#269) @dependabot
- pip(deps): bump pylint from 2.15.0 to 2.15.3 (#270) @dependabot
- pip(deps): bump mkdocs-material from 8.3.9 to 8.4.2 (#268) @dependabot
- pip(deps): bump autopep8 from 1.6.0 to 1.7.0 (#264) @dependabot
- pip(deps): bump pylint from 2.14.5 to 2.15.0 (#265) @dependabot
- pip(deps): bump autoflake from 1.4 to 1.5.3 (#263) @dependabot
- pip(deps): bump black from 22.6.0 to 22.8.0 (#267) @dependabot
- pip(deps): bump flake8 from 5.0.1 to 5.0.4 (#266) @dependabot
- pip(deps): bump pre-commit from 2.19.0 to 2.20.0 (#260) @dependabot
- pip(deps): bump mkdocs from 1.3.0 to 1.3.1 (#261) @dependabot
- pip(deps): bump flake8 from 4.0.1 to 5.0.1 (#259) @dependabot
- pip(deps): bump mkdocs-material from 8.3.8 to 8.3.9 (#258) @dependabot
- pip(deps): bump pylint from 2.14.4 to 2.14.5 (#262) @dependabot
- pip(deps): bump twine from 4.0.0 to 4.0.1 (#252) @dependabot
- pip(deps): bump pylint from 2.14.0 to 2.14.4 (#251) @dependabot
- pip(deps): bump mkdocs-material from 8.2.16 to 8.3.8 (#253) @dependabot
- pip(deps): bump black from 22.3.0 to 22.6.0 (#254) @dependabot
- pip(deps): bump tox from 3.25.0 to 3.25.1 (#255) @dependabot
- pip(deps): bump watchdog from 2.1.8 to 2.1.9 (#256) @dependabot
- github actions(deps): bump actions/setup-python from 3 to 4 (#257) @dependabot
- pip(deps): bump pylint from 2.13.7 to 2.14.0 (#250) @dependabot
- pip(deps): bump watchdog from 2.1.7 to 2.1.8 (#246) @dependabot
- pip(deps): bump pre-commit from 2.18.1 to 2.19.0 (#248) @dependabot
- pip(deps): bump mkdocs-material from 8.2.12 to 8.2.16 (#249) @dependabot
- pip(deps): bump tox from 3.24.5 to 3.25.0 (#242) @dependabot
- pip(deps): bump pre-commit from 2.17.0 to 2.18.1 (#243) @dependabot
- pip(deps): bump click from 8.1.2 to 8.1.3 (#245) @dependabot
- pip(deps): bump pylint from 2.13.4 to 2.13.7 (#240) @dependabot
- pip(deps): bump tqdm from 4.63.1 to 4.64.0 (#244) @dependabot
- pip(deps): bump mkdocs-material from 8.2.8 to 8.2.12 (#241) @dependabot
- pip(deps): bump pytest from 7.1.1 to 7.1.2 (#239) @dependabot
- pip(deps): bump watchdog from 2.1.6 to 2.1.7 (#238) @dependabot
- pip(deps): bump pylint from 2.12.2 to 2.13.4 (#237) @dependabot
- pip(deps): bump mkdocs from 1.2.3 to 1.3.0 (#234) @dependabot
- pip(deps): bump tqdm from 4.63.0 to 4.63.1 (#233) @dependabot
- pip(deps): bump black from 22.1.0 to 22.3.0 (#236) @dependabot
- pip(deps): bump pytest from 7.0.1 to 7.1.1 (#231) @dependabot
- pip(deps): bump click from 8.0.4 to 8.1.2 (#235) @dependabot
- pip(deps): bump mkdocs-material from 8.2.5 to 8.2.8 (#232) @dependabot
- pip(deps): bump twine from 3.8.0 to 4.0.0 (#230) @dependabot
- document updates (#229) @devsetgo
Published Date: 2022 December 04, 16:55
Additional Logging Configuration (v0.8.0)¶
What's Changed¶
- New Logging Configuration items (#228) @devsetgo
- pip(deps): bump tqdm from 4.62.3 to 4.63.0 (#224) @dependabot
- pip(deps): bump mkdocs-material from 8.2.3 to 8.2.4 (#227) @dependabot
- github actions(deps): bump actions/setup-python from 2.3.1 to 3 (#226) @dependabot
- pip(deps): bump mkdocs-material from 8.1.9 to 8.2.3 (#225) @dependabot
- pip(deps): bump twine from 3.7.1 to 3.8.0 (#223) @dependabot
- pip(deps): bump pytest from 6.2.5 to 7.0.1 (#222) @dependabot
- pip(deps): bump pytest-runner from 5.3.1 to 6.0.0 (#221) @dependabot
- pip(deps): bump loguru from 0.5.3 to 0.6.0 (#218) @dependabot
- pip(deps): bump black from 21.12b0 to 22.1.0 (#219) @dependabot
- pip(deps): bump mkdocs-material from 8.1.8 to 8.1.9 (#220) @dependabot
Published Date: 2022 March 12, 21:07
(v0.7.1)¶
What's Changed¶
- Bump version: 0.7.0 → 0.7.1 (#217) @devsetgo
- Hotfix for setup file (#216) @devsetgo
Published Date: 2022 January 29, 01:51
Logging to Beta Testing (v0.7.0)¶
Logging now has basic unit tests and is more ready for use in live applications.
What's Changed¶
- Adding Logging Config (#215) @devsetgo
- pip(deps): bump pre-commit from 2.15.0 to 2.16.0 (#210) @dependabot
- pip(deps): bump pylint from 2.12.1 to 2.12.2 (#211) @dependabot
- pip(deps): bump tox from 3.24.4 to 3.24.5 (#212) @dependabot
- pip(deps): bump black from 21.11b1 to 21.12b0 (#213) @dependabot
- pip(deps): bump twine from 3.6.0 to 3.7.1 (#214) @dependabot
- pip(deps): bump twine from 3.5.0 to 3.6.0 (#204) @dependabot
- pip(deps): bump coverage-badge from 1.0.2 to 1.1.0 (#205) @dependabot
- pip(deps): bump mkdocs-material from 7.3.6 to 8.0.2 (#206) @dependabot
- pip(deps): bump pylint from 2.11.1 to 2.12.1 (#207) @dependabot
- pip(deps): bump black from 21.10b0 to 21.11b1 (#208) @dependabot
- github actions(deps): bump actions/setup-python from 2.2.2 to 2.3.1 (#209) @dependabot
- Dev (#203) @devsetgo
- pip(deps): bump tox from 3.24.3 to 3.24.4 (#193) @dependabot
- pip(deps): bump tqdm from 4.62.2 to 4.62.3 (#194) @dependabot
- pip(deps): bump pylint from 2.10.2 to 2.11.1 (#195) @dependabot
- pip(deps): bump mkdocs-material from 7.2.6 to 7.3.0 (#196) @dependabot
- pip(deps): bump black from 21.8b0 to 21.9b0 (#197) @dependabot
- pip(deps): bump mkdocs-material from 7.2.4 to 7.2.6 (#189) @dependabot
- pip(deps): bump pytest from 6.2.4 to 6.2.5 (#191) @dependabot
- pip(deps): bump watchdog from 2.1.3 to 2.1.5 (#192) @dependabot
- pip(deps): bump tox from 3.24.1 to 3.24.3 (#190) @dependabot
- pip(deps): bump pre-commit from 2.14.0 to 2.15.0 (#188) @dependabot
- pip(deps): bump black from 21.7b0 to 21.8b0 (#187) @dependabot
- pip(deps): bump pylint from 2.9.6 to 2.10.2 (#184) @dependabot
- pip(deps): bump tqdm from 4.62.0 to 4.62.2 (#185) @dependabot
- github actions(deps): bump actions/setup-python from 1 to 2.2.2 (#182) @dependabot
- Bump wheel from 0.36.2 to 0.37.0 (#180) @dependabot
- Bump mkdocs-material from 7.2.2 to 7.2.4 (#181) @dependabot
- Bump tox from 3.24.0 to 3.24.1 (#177) @dependabot
- Bump mkdocs-material from 7.2.1 to 7.2.2 (#178) @dependabot
- Bump pre-commit from 2.13.0 to 2.14.0 (#179) @dependabot
- Bump pylint from 2.9.5 to 2.9.6 (#176) @dependabot
- Bump tqdm from 4.61.2 to 4.62.0 (#175) @dependabot
- Bump mkdocs-material from 7.1.10 to 7.2.1 (#174) @dependabot
- Bump twine from 3.4.1 to 3.4.2 (#171) @dependabot
- Bump pylint from 2.9.3 to 2.9.5 (#170) @dependabot
- Bump mkdocs from 1.2.1 to 1.2.2 (#173) @dependabot
- documentation update (#169) @devsetgo
- README fix (#168) @devsetgo
Published Date: 2022 January 29, 01:42
Logging Configuration (v0.6.0)¶
What's Changed¶
- Adding Logging and Cleanup (#167) @devsetgo
- Bump tqdm from 4.61.1 to 4.61.2 (#166) @dependabot
- Bump pylint from 2.8.3 to 2.9.3 (#165) @dependabot
- Bump watchdog from 2.1.2 to 2.1.3 (#164) @dependabot
- Bump mkdocs-material from 7.1.8 to 7.1.9 (#163) @dependabot
- Bump tqdm from 4.61.0 to 4.61.1 (#162) @dependabot
- Bump mkdocs-material from 7.1.7 to 7.1.8 (#161) @dependabot
- Bump mkdocs from 1.1.2 to 1.2.1 (#159) @dependabot
- Bump black from 21.5b2 to 21.6b0 (#158) @dependabot
- Bump mkdocs-material from 7.1.6 to 7.1.7 (#160) @dependabot
- Bump pytest-cov from 2.12.0 to 2.12.1 (#154) @dependabot
- Bump pylint from 2.8.2 to 2.8.3 (#155) @dependabot
- Bump black from 21.5b1 to 21.5b2 (#156) @dependabot
- Bump mkdocs-material from 7.1.5 to 7.1.6 (#157) @dependabot
- Bump tqdm from 4.60.0 to 4.61.0 (#153) @dependabot
- Bump pre-commit from 2.12.1 to 2.13.0 (#151) @dependabot
- Bump pytest-runner from 5.3.0 to 5.3.1 (#152) @dependabot
- Bump mkdocs-material from 7.1.4 to 7.1.5 (#150) @dependabot
- Bump watchdog from 2.1.1 to 2.1.2 (#149) @dependabot
- Bump click from 7.1.2 to 8.0.1 (#148) @dependabot
- Bump black from 21.5b0 to 21.5b1 (#147) @dependabot
- Bump watchdog from 2.1.0 to 2.1.1 (#146) @dependabot
- Bump pytest-cov from 2.11.1 to 2.12.0 (#145) @dependabot
- Bump flake8 from 3.9.1 to 3.9.2 (#143) @dependabot
- Bump pytest from 6.2.3 to 6.2.4 (#139) @dependabot
- Bump watchdog from 2.0.3 to 2.1.0 (#138) @dependabot
- Bump black from 21.4b2 to 21.5b0 (#140) @dependabot
- Bump mkdocs-material from 7.1.3 to 7.1.4 (#141) @dependabot
- Dev (#142) @devsetgo
- Bump tox from 3.23.0 to 3.23.1 (#137) @dependabot
- Bump autopep8 from 1.5.6 to 1.5.7 (#136) @dependabot
- Bump pylint from 2.7.4 to 2.8.2 (#135) @dependabot
- Bump black from 20.8b1 to 21.4b2 (#134) @dependabot
- Bump mkdocs-material from 7.1.2 to 7.1.3 (#133) @dependabot
- Adding SonarCloud Code Coverage (#130) @devsetgo
- Bump mkdocs-material from 7.1.1 to 7.1.2 (#132) @dependabot
- Bump watchdog from 2.0.2 to 2.0.3 (#131) @dependabot
- Bump pre-commit from 2.12.0 to 2.12.1 (#129) @dependabot
- Bump flake8 from 3.9.0 to 3.9.1 (#128) @dependabot
- Bump mkdocs-material from 7.1.0 to 7.1.1 (#127) @dependabot
- Bump tqdm from 4.59.0 to 4.60.0 (#124) @dependabot
- Bump pytest from 6.2.2 to 6.2.3 (#125) @dependabot
- Bump pre-commit from 2.11.1 to 2.12.0 (#126) @dependabot
- Bump pylint from 2.7.2 to 2.7.4 (#122) @dependabot
- Bump mkdocs-material from 7.0.6 to 7.1.0 (#123) @dependabot
- Bump mkdocs-material from 7.0.5 to 7.0.6 (#121) @dependabot
- Bump flake8 from 3.8.4 to 3.9.0 (#120) @dependabot
- Bump twine from 3.3.0 to 3.4.1 (#118) @dependabot
- Bump autopep8 from 1.5.5 to 1.5.6 (#119) @dependabot
Published Date: 2021 July 16, 23:44
Fixing Publish (v0.5.0-2)¶
What's Changed¶
- adding update for publish (#117) @devsetgo
Published Date: 2021 March 18, 17:19
Calendar and RegEx Function + Documentation (v0.5.0)¶
What's Changed¶
- Adding Calendar Functions (#116) @devsetgo
- Bump pre-commit from 2.10.1 to 2.11.1 (#113) @dependabot
- update to Saturday (#115) @devsetgo
- Bump tqdm from 4.58.0 to 4.59.0 (#112) @dependabot
- Bump mkdocs-material from 7.0.4 to 7.0.5 (#114) @dependabot
- fixes for mkdoc material update (#111) @devsetgo
- Bump tox from 3.22.0 to 3.23.0 (#109) @dependabot
- Bump mkdocs-material from 7.0.2 to 7.0.4 (#108) @dependabot
- Bump pylint from 2.7.1 to 2.7.2 (#107) @dependabot
- Bump coverage from 5.4 to 5.5 (#110) @dependabot
- Bump pylint from 2.6.2 to 2.7.1 (#103) @dependabot
- Bump mkdocs-material from 6.2.8 to 7.0.2 (#104) @dependabot
- Bump watchdog from 2.0.1 to 2.0.2 (#105) @dependabot
- Bump tqdm from 4.57.0 to 4.58.0 (#106) @dependabot
- Bump tox from 3.21.4 to 3.22.0 (#101) @dependabot
- Bump watchdog from 2.0.0 to 2.0.1 (#99) @dependabot
- Bump pylint from 2.6.0 to 2.6.2 (#102) @dependabot
- Bump tqdm from 4.56.2 to 4.57.0 (#100) @dependabot
- Bump pytest-runner from 5.2 to 5.3.0 (#98) @dependabot
- Bump tqdm from 4.56.0 to 4.56.2 (#97) @dependabot
- Bump watchdog from 1.0.2 to 2.0.0 (#96) @dependabot
- Bump pre-commit from 2.10.0 to 2.10.1 (#95) @dependabot
- Bump mkdocs-material from 6.2.6 to 6.2.8 (#94) @dependabot
- Bump tox from 3.21.3 to 3.21.4 (#93) @dependabot
- Bump autopep8 from 1.5.4 to 1.5.5 (#92) @dependabot
- Bump tox from 3.21.2 to 3.21.3 (#87) @dependabot
- Bump mkdocs-material from 6.2.5 to 6.2.6 (#88) @dependabot
- Bump pytest from 6.2.1 to 6.2.2 (#89) @dependabot
- Bump coverage from 5.3.1 to 5.4 (#91) @dependabot
- Bump pre-commit from 2.9.3 to 2.10.0 (#90) @dependabot
- Bump tox from 3.21.1 to 3.21.2 (#84) @dependabot
- Bump mkdocs-material from 6.2.4 to 6.2.5 (#85) @dependabot
- Bump pytest-cov from 2.10.1 to 2.11.1 (#86) @dependabot
- Bump tox from 3.20.1 to 3.21.1 (#81) @dependabot
- Bump mkdocs-material from 6.2.3 to 6.2.4 (#82) @dependabot
- Bump tqdm from 4.55.1 to 4.56.0 (#83) @dependabot
- Bump tqdm from 4.55.0 to 4.55.1 (#80) @dependabot
- Bump mkdocs-material from 6.2.2 to 6.2.3 (#79) @dependabot
Published Date: 2021 March 18, 17:06
Minor updates and library updates. (v0.4.1)¶
What's Changed¶
- Updates and Minor updates (#78) @devsetgo
- Bump tqdm from 4.54.1 to 4.55.0 (#77) @dependabot
- Bump twine from 3.2.0 to 3.3.0 (#76) @dependabot
- Bump coverage from 5.3 to 5.3.1 (#74) @dependabot
- Bump mkdocs-material from 6.1.7 to 6.2.2 (#75) @dependabot
- Bump watchdog from 0.10.4 to 1.0.2 (#73) @dependabot
- Bump pytest from 6.1.2 to 6.2.1 (#71) @dependabot
- Bump wheel from 0.36.1 to 0.36.2 (#70) @dependabot
- Bump tqdm from 4.54.0 to 4.54.1 (#67) @dependabot
- Bump mkdocs-material from 6.1.6 to 6.1.7 (#68) @dependabot
- Bump pre-commit from 2.9.2 to 2.9.3 (#69) @dependabot
- Bump wheel from 0.36.0 to 0.36.1 (#66) @dependabot
- Bump wheel from 0.35.1 to 0.36.0 (#64) @dependabot
- Bump tqdm from 4.53.0 to 4.54.0 (#65) @dependabot
- Bump pre-commit from 2.8.2 to 2.9.2 (#61) @dependabot
- Bump mkdocs-material from 6.1.5 to 6.1.6 (#60) @dependabot
- Bump tqdm from 4.52.0 to 4.53.0 (#62) @dependabot
- Bump watchdog from 0.10.3 to 0.10.4 (#63) @dependabot
- Bump tqdm from 4.51.0 to 4.52.0 (#59) @dependabot
- Bump mkdocs-material from 6.1.4 to 6.1.5 (#58) @dependabot
- Bump mkdocs-material from 6.1.2 to 6.1.4 (#57) @dependabot
- Bump pre-commit from 2.8.0 to 2.8.2 (#55) @dependabot
- Bump mkdocs-material from 6.1.0 to 6.1.2 (#56) @dependabot
- Bump pytest from 6.1.1 to 6.1.2 (#52) @dependabot
- Bump pre-commit from 2.7.1 to 2.8.0 (#53) @dependabot
- Bump tqdm from 4.50.2 to 4.51.0 (#54) @dependabot
- Bump mkdocs-material from 6.0.2 to 6.1.0 (#51) @dependabot
- Bump tqdm from 4.50.1 to 4.50.2 (#49) @dependabot
- Bump tox from 3.20.0 to 3.20.1 (#50) @dependabot
- Bump pytest from 6.1.0 to 6.1.1 (#48) @dependabot
- Bump mkdocs-material from 6.0.1 to 6.0.2 (#47) @dependabot
- Bump flake8 from 3.8.3 to 3.8.4 (#45) @dependabot
- Bump tqdm from 4.50.0 to 4.50.1 (#44) @dependabot
- Bump bump2version from 1.0.0 to 1.0.1 (#46) @dependabot
- Bump tqdm from 4.49.0 to 4.50.0 (#42) @dependabot
- Bump black from 19.10b0 to 20.8b1 (#43) @dependabot
- Bump tqdm from 4.46.0 to 4.49.0 (#40) @dependabot
- Bump pytest from 5.4.2 to 6.1.0 (#39) @dependabot
- Bump coverage from 5.1 to 5.3 (#38) @dependabot
- Bump autoflake from 1.3.1 to 1.4 (#41) @dependabot
- Bump twine from 3.1.1 to 3.2.0 (#37) @dependabot
- Bump wheel from 0.34.2 to 0.35.1 (#34) @dependabot
- Bump pytest-cov from 2.9.0 to 2.10.1 (#36) @dependabot
- Bump watchdog from 0.10.2 to 0.10.3 (#35) @dependabot
- Bump mkdocs-material from 5.2.2 to 6.0.1 (#33) @dependabot
- Bump pylint from 2.5.2 to 2.6.0 (#32) @dependabot-preview
- Bump pre-commit from 2.4.0 to 2.7.1 (#31) @dependabot-preview
- Bump tox from 3.15.1 to 3.20.0 (#30) @dependabot-preview
- Bump flake8 from 3.8.2 to 3.8.3 (#29) @dependabot-preview
- Bump autopep8 from 1.5.2 to 1.5.4 (#28) @dependabot-preview
Published Date: 2020 December 26, 23:51
0.4.0 - save_csv options (v0.4.0)¶
[0.4.0] - Examples and Data¶
Added¶
- Skipping version 0.3.0; its changes are rolled into 0.4.0
- Adding delimiter option to save_csv
    - Tests to check if delimiter is > 1 character
    - Set ',' if none provided
- Adding quotechar option to save_csv
    - Tests to check if quotechar is > 1 character
    - Set '"' if none provided
- Add test of non-list input to save_csv
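The delimiter and quotechar handling described above can be sketched as follows. This is a minimal illustration built on Python's standard csv module; the function name and signature here are a simplified stand-in for this example, not the library's actual API:

```python
import csv


def save_csv(file_name, data, delimiter=None, quotechar=None):
    """Simplified sketch of the save_csv validation behavior described above."""
    # Fall back to ',' when no delimiter is supplied
    if delimiter is None:
        delimiter = ","
    # Reject delimiters longer than one character
    if len(delimiter) > 1:
        raise TypeError("delimiter must be a single character")
    # Fall back to '"' when no quotechar is supplied
    if quotechar is None:
        quotechar = '"'
    # Reject quotechars longer than one character
    if len(quotechar) > 1:
        raise TypeError("quotechar must be a single character")
    # Reject non-list input
    if not isinstance(data, list):
        raise TypeError("data must be a list of rows")
    with open(file_name, "w", newline="") as f:
        writer = csv.writer(f, delimiter=delimiter, quotechar=quotechar)
        writer.writerows(data)
```

For example, `save_csv("out.csv", [["a", "b"]], delimiter="|")` writes a pipe-delimited file, while passing a two-character delimiter raises `TypeError`.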
[0.3.0] - Examples and Data¶
Added¶
- Adding examples (see examples folder)
- Adding file_function documentation
- Adding documents site - https://devsetgo.github.io/devsetgo_lib/
Published Date: 2020 April 16, 21:54
Improvements (v0.2.0)¶
- Improved Tests
- Improved Errors
- Adding more logging
Published Date: 2020 January 26, 21:08
v0.1.1 (v0.1.1)¶
- New documentation
- Fixes to PyPI deployment
Published Date: 2020 January 26, 17:26
Beta Release (v0.1.0b2)¶
Basic functions (file and folder). Published to PyPI (fixing PyPI publishing issues). Needs documentation.
Published Date: 2020 January 26, 13:03
Pypi Beta Release (v0.1.0b)¶
- Change to semantic versioning
- Publish to PyPI
- Base functions
Published Date: 2020 January 26, 12:53