
How to Choose and Model Time Series Databases

The right database will be able to handle the volume and complexity of the data being generated, integrate with existing systems, and be cost-effective. Here's some guidance on making the right choice for your use case.

Choosing the right database is essential for any organization that wants to efficiently manage and analyze its time series data. The right database will be able to handle the volume and complexity of the data being generated, integrate with existing systems seamlessly, and be cost-effective.

However, selecting the wrong database can result in performance issues, data loss and a significant waste of time and resources. Therefore, it is crucial to carefully evaluate and choose a database that is best suited for the organization’s specific needs, considering factors such as data volume, query complexity, integration and cost.

Here’s how to evaluate your choices, and what to consider — along with some best practices for modeling time series data.

Evaluating a Time Series Database

Choosing the right time series database for your use case can be a daunting task, as there are many options available with varying features and capabilities. Here are some factors to consider.

Data volume and velocity. Consider the expected volume of time series data that you will be collecting and storing. Choose a database that can handle the expected data volume, and that can scale as those volumes increase over time.

Query complexity. Consider the types of queries that you will be running. Some databases are better suited for simple queries, while others offer more advanced query languages and functions for complex analytics. Choose a database that can handle the complexity of your queries, and that offers a query language that is well-suited for your use case.

Integration with existing systems. Consider the systems that you already have in place, such as monitoring and analytics tools, and choose a database that can integrate seamlessly with those systems. This will make it easier to manage and analyze your time series data.

Security. Choose a database that offers robust security features, such as encryption and access control, and meets your data’s security requirements.

Cost and licensing. Consider the database’s cost, as some features and capabilities may bring a higher price tag. Also think about long-term costs, including licensing fees, maintenance costs and scalability.

Support and community. Finally, consider the support and community around the time series database. Look for databases with active development and a strong community of users who can provide support and share best practices.

Best Practices for Modeling Time Series Data

To make sure your data set is useful for analysis and decision-making, follow these best practices:

Define the Business Problem

Before collecting any data, it’s important to clearly define the business problem you are trying to solve. What do you want to learn from this data? This will help you determine the appropriate granularity and time interval you’ll need to run queries and do analysis.

Choose the Granularity and Time Interval

Choosing the right granularity and time interval is crucial for accurate analysis, efficient storage and efficient query processing. It involves finding the right balance between capturing enough detail to support your analysis and keeping storage space requirements and computational costs manageable.

Granularity refers to the level of detail captured by each data point in a time series, while time interval determines how often data points are recorded.

One of the key tradeoffs when selecting granularity is the balance between detail and storage space. High granularity means capturing more detail in each data point, which provides more information for analysis but also takes up more storage space. On the other hand, low granularity captures less detail but requires less storage space.

The choice of granularity and time interval also depends on the type of data being collected and the specific use case. For instance, in applications like sensor data collection, where data is generated continuously, a higher granularity is often necessary to capture short-term changes accurately. In contrast, for data generated periodically, such as sales data, lower granularity is often sufficient.

Selecting the time interval also involves tradeoffs between data accuracy and storage space. A shorter time interval provides more data points but also increases storage requirements, whereas a longer time interval reduces the storage requirements but may miss important events that occur between measurements.

In addition to these tradeoffs, consider the downstream analysis that will be performed on the data. For example, if your analysis requires calculating daily averages, a time interval of one hour may provide too much data and lead to unnecessary computational costs.
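
As a concrete illustration of this tradeoff, here is a minimal pandas sketch (the one-second sensor readings are invented for the example) that downsamples raw data to one-minute averages, trading per-second detail for a 60x reduction in stored points:

import pandas as pd
import numpy as np

# Hypothetical raw readings: one temperature sample per second for one hour.
idx = pd.date_range("2023-04-28 00:00:00", periods=3600, freq="s")
raw = pd.DataFrame({"temp_c": 20 + np.random.randn(3600) * 0.5}, index=idx)

# Downsample to one-minute averages: 60x fewer rows to store, but still
# plenty of detail for the hourly or daily aggregates used downstream.
per_minute = raw.resample("1min").mean()

print(len(raw), "raw points ->", len(per_minute), "stored points")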

Plan for Missing or Incomplete Data

Missing data is a common issue in time series data. Make a plan for how to handle it, whether it’s through imputation, interpolation or simply ignoring the missing data.
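
Here is a short pandas sketch of the three strategies just mentioned, applied to an invented series with gaps standing in for missed readings:

import pandas as pd
import numpy as np

# A small hourly series with NaN gaps standing in for missed sensor readings.
idx = pd.date_range("2023-04-28", periods=6, freq="h")
readings = pd.Series([10.0, np.nan, 12.0, np.nan, np.nan, 15.0], index=idx)

filled = readings.ffill()                            # imputation: repeat the last known value
interpolated = readings.interpolate(method="time")   # linear interpolation between neighbors
dropped = readings.dropna()                          # or simply ignore the missing points

print(interpolated)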

Normalize Your Data

Time-series data often comes from multiple sources with different units of measurement, varying levels of precision and different data ranges. To make the data useful for analysis, it’s essential to normalize it.

Normalization is the process of transforming data into a common scale to facilitate accurate comparison and analysis. In the next section, we’ll explore the importance of normalization for accurate comparisons and analytics, and methods for normalizing data.

Why Normalization of Time Series Data Matters

Normalization is essential for accurate data comparisons and analysis. If the data is not normalized, it may lead to biased results or incorrect conclusions.

For example, imagine comparing the daily average temperature in two cities. If one city reports the temperature in Celsius, and the other city reports it in Fahrenheit, comparing the raw data without normalizing it would produce misleading results. Normalizing the data to a common unit of measurement, such as Kelvin, would provide an accurate basis for comparison.
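
To make that example concrete, here is a small sketch (with invented readings) that brings both series onto the Kelvin scale before comparing them:

def celsius_to_kelvin(c: float) -> float:
    return c + 273.15

def fahrenheit_to_kelvin(f: float) -> float:
    return (f - 32) * 5 / 9 + 273.15

city_a_c = 25.0   # city A reports Celsius
city_b_f = 77.0   # city B reports Fahrenheit

# Comparing 25 to 77 directly is misleading; in Kelvin both are 298.15.
print(celsius_to_kelvin(city_a_c), fahrenheit_to_kelvin(city_b_f))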

Normalization is also crucial for analytics, such as machine learning and deep learning. Most algorithms are sensitive to the magnitude of data, and normalization helps ensure that the algorithm weights each input feature equally. Without normalization, some features with large ranges may dominate the model’s training process, leading to inaccurate results.

Methods for Normalizing Data

There are several methods for normalizing data, including:

Min-max scaling. This method scales the data to a specific range, usually between 0 and 1 or -1 and 1. It’s calculated by subtracting the minimum value from each data point and then dividing the result by the range between the maximum and minimum values.

Z-score normalization. This method scales the data to have a mean of 0 and a standard deviation of 1. It’s calculated by subtracting the mean from each data point and then dividing the result by the standard deviation.

Decimal scaling. This involves scaling the data by moving the decimal point of each value to the left or right. The number of decimal places moved depends on the maximum absolute value of the data.

Log transformation. This involves transforming the data using a logarithmic function. It’s often used for data with a wide range of values.
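
Here is a brief NumPy sketch of three of these methods applied to a made-up set of readings:

import numpy as np

values = np.array([120.0, 80.0, 150.0, 200.0, 95.0])  # invented readings

# Min-max scaling to [0, 1]: (x - min) / (max - min)
min_max = (values - values.min()) / (values.max() - values.min())

# Z-score normalization: (x - mean) / standard deviation
z_score = (values - values.mean()) / values.std()

# Log transformation, useful when positive values span a wide range
log_scaled = np.log(values)

print(min_max.round(2), z_score.round(2), log_scaled.round(2), sep="\n")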

Choosing an Appropriate Data Type

Different types of data can be stored in different formats, and choosing the wrong data type can lead to inefficient storage and retrieval, as well as inaccurate analysis.

The first consideration is to choose a data type that accurately represents the data being collected. For example, if you are collecting temperature data, you may choose a data type of float or double precision, depending on the precision required. If you are collecting binary data, such as on/off states, you may choose a Boolean data type.

Another consideration is the size of the data. Using the smallest data type that accurately represents the data can help to reduce storage requirements and improve query performance.

For example, if you are collecting integer values that range from 0 to 255, you can use an unsigned byte data type, which only requires one byte of storage per value. This is much more efficient than using a larger data type such as a 32-bit integer, which requires four bytes of storage per value.
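
A quick NumPy sketch makes the storage difference visible; the readings are randomly generated for the example:

import numpy as np

# One million readings that all fall in the range 0-255.
readings = np.random.randint(0, 256, size=1_000_000)

as_int32 = readings.astype(np.int32)   # 4 bytes per value
as_uint8 = readings.astype(np.uint8)   # 1 byte per value

print(as_int32.nbytes, "bytes vs", as_uint8.nbytes, "bytes")  # ~4 MB vs ~1 MB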

Consider the size of the database and the volume of data being collected. For example, if you are collecting high-frequency data, such as millisecond-level sensor data, you may want to use a data type that supports efficient compression, such as a delta-encoded data type. This can help to reduce storage requirements while maintaining data fidelity.

Finally, consider the types of operations that will be performed on the data. For example, if you will be performing aggregate operations such as summing or averaging over large data sets, you may want to use a data type that supports efficient aggregation, such as a fixed-point decimal data type.

How to Detect and Handle Outliers

Outliers are data points that deviate significantly from the expected values in a time series. They can occur due to various reasons such as measurement errors, equipment malfunctions, or actual anomalies in the data. Outliers can have a significant impact on the analysis of time series data, as they can skew results and affect the accuracy of models.

Detecting and handling outliers in time series data is an important task to ensure accurate analysis and modeling. Here are some methods for doing so.

Visual inspection. One of the simplest methods for detecting outliers is to plot the data and look for data points that are significantly different from the rest of the data.

Statistical methods. Various statistical methods can be used to detect outliers in time series data, such as the z-score method, which identifies data points that are more than a certain number of standard deviations away from the mean.

Machine learning algorithms. Algorithms can be trained to detect outliers in time series data. For example, an autoencoder neural network can be trained to reconstruct the time series data and identify data points that cannot be reconstructed accurately.

Once outliers are identified, there are several methods for handling them, including:

Removing outliers. The simplest method for handling outliers is to remove them from the data set. However, this approach can result in the loss of valuable information and should be used judiciously.

Imputing values. Another approach is to impute the values of the outliers based on the surrounding data points. This method can be useful when the outliers are caused by measurement errors or other minor anomalies; a short sketch combining z-score detection with imputation follows this list.

Model-based approaches. These can be used to handle outliers by incorporating them into the model. For example, robust regression models can be used that are less sensitive to outliers.
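
As a minimal sketch of how detection and handling can fit together, the example below flags points more than three standard deviations from the mean (a common convention, not a fixed rule) and imputes them by interpolation; the series and the injected spike are invented:

import pandas as pd
import numpy as np

# An invented hourly series with one spike standing in for a faulty reading.
rng = np.random.default_rng(0)
idx = pd.date_range("2023-04-28", periods=48, freq="h")
series = pd.Series(20 + rng.normal(0, 0.5, size=48), index=idx)
series.iloc[20] = 45.0

# Z-score method: flag points more than 3 standard deviations from the mean.
z = (series - series.mean()) / series.std()
outliers = z.abs() > 3

# Handle the flagged points by imputing from their neighbors.
cleaned = series.mask(outliers).interpolate(method="time")

print(series[outliers])   # which points were flagged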

Best Practices for Managing Time Series Databases

Maintaining a time series database requires careful planning and execution to ensure its smooth operation and optimal performance. Here are some best practices for managing and maintaining these databases, covering four essential areas: monitoring and troubleshooting, backup and disaster recovery, scaling and performance tuning, and security and access control.

Monitoring and Troubleshooting

To ensure the optimal performance of a time series database, it must be monitored continuously. Monitoring can help detect issues such as slow queries, high CPU utilization and network latency. Various tools can be used to monitor time series databases, including monitoring dashboards, alerts, and logging. These tools can help identify and isolate issues quickly and efficiently, reducing downtime and ensuring business continuity.

Some common methods for monitoring and troubleshooting time series databases include:

Monitoring tools. There are several monitoring tools available for time series databases, such as Prometheus, InfluxDB, and Grafana. These tools can provide real-time metrics and alerts on the performance of the database, such as query response time, CPU and memory usage, and disk space utilization.

Logging. A critical tool for troubleshooting issues in time series databases, logging can help identify errors, warnings and other issues that may be affecting the performance of a database. It’s important to configure logging to capture the necessary information for diagnosing issues, such as query logs and server logs.

Performance tuning. Performance tuning is the process of optimizing the database for maximum performance. This can involve adjusting the configuration settings of the database, such as buffer sizes, thread pools and cache sizes, to ensure that the database is running efficiently.

Load testing. This involves simulating a high volume of traffic on the database to test its performance under stress. This can help identify any bottlenecks or performance issues that may be affecting the database.

Health checks. Regular health checks can help detect and prevent issues with the database. These checks can include verifying the database’s disk space, checking for hardware failures, and testing the database’s failover and recovery mechanisms.
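
The sketch below shows a very simple scripted health check of the kind described above; the endpoint URL, data path and free-space threshold are placeholders for illustration, not part of any particular database's API:

import shutil
import urllib.request

HEALTH_URL = "http://localhost:8086/health"   # hypothetical health endpoint
DATA_PATH = "/var/lib/tsdb"                   # hypothetical data directory
MIN_FREE_BYTES = 10 * 1024**3                 # alert below 10 GiB free

def check_disk() -> bool:
    # Verify the database volume still has headroom for incoming writes.
    return shutil.disk_usage(DATA_PATH).free > MIN_FREE_BYTES

def check_endpoint() -> bool:
    # A 200 response suggests the database process is up and answering.
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

print("disk ok:", check_disk(), "| endpoint ok:", check_endpoint())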

Backup and Disaster Recovery

To protect the data against loss or corruption, it’s crucial to back up your time series database. Disaster recovery plans should be in place to ensure that the business can recover from any unforeseen events. How frequently you run backups should be determined by the value of your data and the potential impact of its loss.

Here are some methods for backup and disaster recovery:

Regular backups. One of the most common and effective methods for backing up time series data is to take regular backups at specified intervals, and to perform them at different levels, such as the file system, database, or application.

Replication. Replication involves creating a copy of your data and storing it on a different system. This can be a good strategy for disaster recovery as it provides a redundant copy of your data that can be quickly accessed in case of a system failure. Replication can be synchronous or asynchronous; consider the cost and complexity of replication when deciding on a backup strategy.

Cloud-based backups. Many time series databases offer cloud-based backup solutions that can automatically back up your data to remote servers. This can be a good option for organizations that want to ensure their data is protected without having to manage backup infrastructure themselves.

Disaster recovery plans. In addition to backups, having a comprehensive disaster recovery plan in place is essential for quickly restoring data in case of unexpected events. A disaster recovery plan should include procedures for restoring backups, identifying critical systems and data, and testing recovery procedures regularly.

Monitoring and testing. Regularly monitor and test backup and disaster recovery procedures to ensure that they are working effectively. Regular testing can identify potential issues and help optimize backup and recovery processes.
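
Here is a deliberately generic sketch of a file-system-level backup with a simple retention policy; the paths are hypothetical, and for a live database the vendor's own backup command (or a pause in writes) is usually needed to guarantee a consistent snapshot:

import shutil
import time
from pathlib import Path

DATA_DIR = Path("/var/lib/tsdb")     # hypothetical database data directory
BACKUP_DIR = Path("/backups/tsdb")   # hypothetical backup destination
RETAIN = 14                          # keep the 14 most recent archives

def take_backup() -> Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    # File-system-level backup: archive the data directory as a .tar.gz file.
    archive = shutil.make_archive(str(BACKUP_DIR / f"tsdb-{stamp}"), "gztar", str(DATA_DIR))
    return Path(archive)

def prune_old_backups() -> None:
    # Enforce a simple retention policy so backups don't fill the disk.
    archives = sorted(BACKUP_DIR.glob("tsdb-*.tar.gz"))
    for old in archives[:-RETAIN]:
        old.unlink()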

Scaling and Performance Tuning

Time series databases can grow significantly over time, and as such, they require scaling to ensure optimal performance. Scaling can be achieved horizontally or vertically, and the choice of method depends on the specific use case. Performance tuning involves optimizing the database’s configuration and queries to ensure that queries are processed efficiently and quickly.

To ensure that a time series database can handle these increasing demands, here are several methods to help scale the database and fine-tune its performance:

Partitioning. Partitioning is a technique for dividing data into smaller subsets, making it easier to manage and query. Partitioning reduces the amount of data each query needs to scan, so queries execute more efficiently and response times improve.

Indexing. Indexing helps optimize query performance by creating indexes on columns that are frequently used in queries. This allows the database to quickly find relevant data and return results faster.

Caching. Caching is a technique for storing frequently accessed data in memory to reduce the need for repeated access to disk storage. This can significantly improve query response times and overall database performance.

Compression. This method reduces the storage space required for data by encoding it in a more compact format. By compressing data, storage costs can be reduced, and query response times can be improved as less data needs to be read from the disk.

Sharding. Sharding is a technique for horizontally partitioning data across multiple servers, allowing for greater scalability and higher throughput. This method is particularly useful for databases that are experiencing rapid data growth and require more processing power to handle queries.

Hardware upgrades. Upgrading hardware, such as increasing the amount of RAM or adding more CPU cores, can significantly improve the performance of a time series database. By increasing the processing power and memory available, queries can be executed faster, and the database can handle more concurrent requests.
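
To show the idea behind time-based partitioning, here is a small application-level sketch that routes points to monthly partitions; the partition names are made up, and in practice most time series databases handle partitioning natively:

from collections import defaultdict
from datetime import datetime, timezone

def partition_key(ts: datetime) -> str:
    # Route each point to a monthly partition, e.g. "metrics_2023_04".
    return f"metrics_{ts.year}_{ts.month:02d}"

# Hypothetical incoming points: (timestamp, value) pairs.
points = [
    (datetime(2023, 4, 28, 12, tzinfo=timezone.utc), 21.3),
    (datetime(2023, 5, 1, 0, tzinfo=timezone.utc), 20.8),
]

partitions: dict[str, list[tuple[datetime, float]]] = defaultdict(list)
for ts, value in points:
    partitions[partition_key(ts)].append((ts, value))

# A query for April 2023 now only scans the "metrics_2023_04" partition.
print({name: len(rows) for name, rows in partitions.items()})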

Security and Access Control

Time series data can be sensitive and valuable, and as such, it requires robust security measures. Access control is crucial to controlling who can access the data and what actions they can perform.

Here are some methods for securing time series databases, to protect sensitive data and prevent unauthorized access.

Role-based access control (RBAC). RBAC, a widely used approach for controlling access to databases, assigns specific roles to users, which determine their level of access. For example, an administrator may have full access to the database, while a regular user may have read-only access.

Encryption. Encryption is the process of encoding data in a way that only authorized parties can read it. Time series databases can use encryption to protect data at rest and in transit.

Authentication and authorization. Authentication ensures that users are who they claim to be, while authorization determines what actions they can perform. Strong authentication and authorization policies can prevent unauthorized access and data breaches.

Network security. Time series databases can be secured by implementing network security measures, such as firewalls, intrusion detection systems and virtual private networks (VPNs). These measures can help protect against external threats and prevent unauthorized access to the database.

Audit trails. These can be used to track who has accessed the database and what changes have been made. This can help detect unauthorized access and prevent data tampering.
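
As a toy illustration of role-based access control, the sketch below maps made-up roles to the operations they may perform; a production deployment should rely on the database's built-in access controls rather than application-side checks like this:

# Made-up roles and the operations each is allowed to perform.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete", "manage_users"},
    "analyst": {"read"},
    "ingest_service": {"write"},
}

def is_allowed(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("admin", "delete")
assert is_allowed("analyst", "read")
assert not is_allowed("analyst", "write")   # read-only users cannot write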
