Best Practices for Naming Conventions for Databases, Tables, Columns, and Fields
Primary Key Indexes
Primary key indexes are automatically created when a primary key is defined on a table. They ensure uniqueness and fast retrieval of records based on the primary key.
Use Case:
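A minimal sketch in PostgreSQL, assuming a hypothetical users table; defining the primary key is all it takes, since the index is created implicitly:
CREATE TABLE users (
    user_id SERIAL PRIMARY KEY,  -- the primary key index is created automatically
    user_email VARCHAR(255) NOT NULL
);
-- Lookups by primary key use that index:
SELECT * FROM users WHERE user_id = 42;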
Unique Indexes
Unique indexes ensure that all values in the indexed columns are distinct.
Use Case:
CREATE UNIQUE INDEX idx_user_email ON users (user_email);
Non-Unique Indexes
Non-unique indexes improve the performance of queries by allowing quick lookups based on the indexed columns.
Use Case:
CREATE INDEX idx_email ON users (email);
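A typical lookup this index serves (the users table here is assumed):
SELECT * FROM users WHERE email = 'alice@example.com';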
Composite Indexes
Composite indexes (multi-column indexes) are created on multiple columns. They are useful when queries filter on multiple columns.
Use Case:
CREATE INDEX idx_user_name_email ON users (username, email);
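As a rule of thumb, a multicolumn B-tree index serves queries that filter on its leading column(s); a sketch against the hypothetical users table:
SELECT * FROM users WHERE username = 'alice' AND email = 'alice@example.com';
-- The leading column alone can still use the index:
SELECT * FROM users WHERE username = 'alice';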
Full-Text Indexes
Full-text indexes enable efficient searching of text data. They are particularly useful for searching large text fields.
Use Case:
CREATE FULLTEXT INDEX idx_article_content ON articles (content);
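Note that CREATE FULLTEXT INDEX is MySQL syntax; there, a search against the index runs through MATCH ... AGAINST:
SELECT * FROM articles WHERE MATCH(content) AGAINST('database indexing');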
Partial Indexes
Partial indexes are created on a subset of rows in a table, based on a condition specified in the WHERE clause.
Use Case:
CREATE INDEX idx_active_users ON users (last_login) WHERE active = TRUE;
SELECT * FROM users WHERE active = TRUE ORDER BY last_login;
Clustered Indexes
Clustered indexes determine the physical order of data in a table. Each table can have only one clustered index.
Use Case:
CREATE CLUSTERED INDEX idx_order_date ON orders (order_date);
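Note that CREATE CLUSTERED INDEX is SQL Server syntax. PostgreSQL has no persistent clustered index; instead, the one-time CLUSTER command physically reorders the table using an existing index:
CREATE INDEX idx_order_date ON orders (order_date);
CLUSTER orders USING idx_order_date;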
1) Avoid writing raw queries directly in application code such as Node.js or Laravel. An ORM can help you write optimized and secure queries more easily.
2) Optimize queries
Avoid SELECT *: Fetch only the columns you need.
Avoid the N+1 Query Problem: Use joins or batch queries instead of issuing one query per row in a loop (see the sketch below).
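A sketch of the fix in SQL, assuming hypothetical users and orders tables: instead of one query per user in a loop, fetch everything in a single join:
SELECT u.user_id, u.username, o.order_id, o.order_total
FROM users u
LEFT JOIN orders o ON o.user_id = u.user_id;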
3) Application Caching
Client Libraries: Use libraries like node-cache or redis for implementing caching in your application.
node-cache: an in-process cache, best for small amounts of data that live inside a single Node.js process.
Redis: an external cache, better for larger datasets or when the cache must be shared across processes or servers.
4) Using PostgreSQL's Built-in Caching
PostgreSQL itself has various caching mechanisms built in, such as:
shared_buffers: Caches frequently accessed disk blocks in memory.
work_mem: Memory available for internal sort operations and hash tables before PostgreSQL spills to disk.
effective_cache_size: Tells the query planner how much disk cache it can assume is available from the operating system and PostgreSQL combined.
To optimize these settings, you can modify the postgresql.conf file:
shared_buffers = 1GB
work_mem = 64MB
effective_cache_size = 3GB
5) Views:
A database view is a virtual table based on the result of a SQL query.
Use a view when you run the same multi-join query frequently: it encapsulates the complex query behind a single named object that callers can query like a table.
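A minimal sketch, assuming hypothetical users and orders tables:
CREATE VIEW user_order_summary AS
SELECT u.user_id, u.username, COUNT(o.order_id) AS order_count
FROM users u
LEFT JOIN orders o ON o.user_id = u.user_id
GROUP BY u.user_id, u.username;
-- The join now hides behind a simple query:
SELECT * FROM user_order_summary WHERE order_count > 10;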
6) Database Caching Engines:
If your workload matches the features described below, consider adding caching middleware in front of the database.
Pgpool-II is a middleware tool that sits between PostgreSQL clients and PostgreSQL servers. It provides several features to enhance the performance, scalability, and availability of PostgreSQL databases. Some of the key features include connection pooling, load balancing, automatic failover, and query caching.
7) DB Connection Pooling
Use a connection pool to manage database connections instead of opening a new connection per request.
Max Connections: Set the max property in the pool configuration to control the maximum number of connections in the pool. The optimal number depends on your application and database server capabilities.
Idle Timeout: The idleTimeoutMillis setting controls how long a connection can be idle before being closed. This helps free up resources.
Error Handling: Always handle errors when working with pools to ensure connections are properly released back to the pool.
Resource Cleanup: Ensure that connections are released back to the pool after use, typically in a finally block.
8) DB Trip Reduction Techniques:
A stored procedure can bundle multiple queries into a single transaction, so the application performs several operations in one round trip.
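A minimal sketch in PostgreSQL 11+, assuming a hypothetical accounts table; both updates run in one transaction and one round trip:
CREATE PROCEDURE transfer_funds(from_id INT, to_id INT, amount NUMERIC)
LANGUAGE plpgsql
AS $$
BEGIN
    UPDATE accounts SET balance = balance - amount WHERE account_id = from_id;
    UPDATE accounts SET balance = balance + amount WHERE account_id = to_id;
END;
$$;

CALL transfer_funds(1, 2, 100.00);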
9) Triggers:
Triggers are specialized stored procedures that automatically execute in response to certain events on a particular table or view in a database. Use them where they fit your requirements; by letting the database perform follow-up work itself, they reduce the number of calls from the application.
Trigger Events: The events that activate the trigger. Common events include:
INSERT: When a new row is added to a table.
UPDATE: When an existing row is modified.
DELETE: When a row is removed from a table.
Trigger Timing: Defines when the trigger's actions are executed relative to the triggering event:
BEFORE: The trigger fires before the operation is executed.
AFTER: The trigger fires after the operation is executed.
INSTEAD OF: For views, this allows the trigger to replace the usual operation.
Trigger Scope: Specifies whether the trigger operates at the row level (FOR EACH ROW) or statement level (FOR EACH STATEMENT).
Real use case: we have two tables, users and user_logs. When any record in users is updated, a trigger automatically adds a log entry to user_logs, saving the application an extra database call.
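A minimal sketch of that trigger in PostgreSQL (11+ for EXECUTE FUNCTION); the user_logs columns here are assumed:
CREATE OR REPLACE FUNCTION log_user_update()
RETURNS TRIGGER AS $$
BEGIN
    -- record which row changed and when
    INSERT INTO user_logs (user_id, changed_at)
    VALUES (NEW.user_id, NOW());
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_user_update
AFTER UPDATE ON users
FOR EACH ROW
EXECUTE FUNCTION log_user_update();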
10) Regular Backups: Implement a regular backup strategy to prevent data loss.
Set up AWS Data Lifecycle Manager to schedule snapshots on AWS.
For other hosting providers, use an automated backup service.
If no automated option exists, take manual backups on a regular schedule.
Automated or manual: Restore your backups periodically for testing purposes. Sometimes a backup cannot be restored because of errors in the backup file, and a restore test is the only way to catch this.
Disaster Recovery Plan:
Develop and test a disaster recovery plan to ensure quick recovery from data loss incidents.