Which Database Model to Choose?
Summary
TLDR: This video script explores the challenges of data modeling for scalable applications, comparing various database types. It highlights the limitations of traditional relational databases and introduces alternatives like key-value, column-family, document, and graph databases, each with their own advantages and use cases. The script discusses performance, scalability, consistency, and the importance of choosing the right database model for specific needs, emphasizing the trade-offs between speed, complexity, and data integrity.
Takeaways
- 😌 The biggest challenge in app development isn't coding but data modeling at scale to avoid performance issues.
- 📊 Traditional relational databases still dominate the market with a 72% share; their tables and relationships suit most apps but can become a bottleneck as data grows in size or complexity.
- 🔄 Alternative data models like graph and wide-column stores can handle complexity and scale better, but choosing the right one depends on the project's unique needs.
- 🔑 Key-value databases are simple, fast for data retrieval, and well-suited for in-memory storage, providing sub-millisecond response times.
- 💡 While in-memory storage like RAM is fast, it's not practical for all database types due to cost and the need for data persistence on disk.
- 🚀 Key takeaway: Key-value databases are optimized for high performance and low latency, making them ideal for caching frequently used data.
- 📚 Wide-column stores organize data in column families and are highly partitionable, enabling horizontal scaling, but they are not optimized for analytical queries.
- 🔍 Document databases excel at storing related information in a single document, simplifying data handling but risking data duplication and inconsistency if not managed carefully.
- 🔗 Relational databases are best for transactional processing with strong ACID guarantees, ensuring data integrity and consistency, but can struggle with horizontal scaling.
- 🌐 Graph databases are powerful for complex, multi-hop relationships, offering fast queries by traversing relationships directly without joins, but require expertise to manage.
- 🛠️ Each database type has its use cases and limitations; choosing the right one involves understanding the project's requirements for data structure, scalability, and query complexity.
Q & A
What is the biggest challenge in app development according to the script?
-The biggest challenge in app development is not writing code, but figuring out how to model data in a way that works at scale to prevent issues like slow performance, data inconsistencies, and difficulties in adding new features.
Why might traditional relational databases become a bottleneck as data grows in size or complexity?
-Traditional relational databases can become a bottleneck because they use tables and relationships to model data, which may not scale efficiently as data size or complexity increases.
What are some alternative data models mentioned for handling complex data?
-Alternative data models mentioned include graph databases, which can handle complexity, and wide-column databases, which can scale to very large data volumes.
How does a key-value database model data and what are its advantages?
-A key-value database models data as a collection of key-value pairs with unique identifiers, allowing for fast access. It uses a hash table to store keys and pointers to data values, making data retrieval very fast and efficient.
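A minimal sketch of this idea in Python, using the built-in dict as the hash table; the keys and values are invented for illustration:

```python
# A key-value store is essentially a hash table: the key hashes straight
# to the slot holding the value, so reads and writes run in constant time.
store = {}

def put(key, value):
    store[key] = value  # hash(key) decides where the value lives; no schema

def get(key):
    return store.get(key)  # the same hash locates the value in O(1)

put("session:42", {"user": "alice", "ttl": 3600})
print(get("session:42"))  # -> {'user': 'alice', 'ttl': 3600}
```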
Why are key-value databases often stored in memory and what is the benefit?
-Key-value databases are often stored in memory due to their simple model and small data set, which allows for blazing-fast data retrieval, sometimes with sub-millisecond response times.
What are the limitations of storing all database types in memory?
-Limitations include the cost of memory, the need to persist data on disk for mission-critical apps to prevent data loss in case of a crash, and the fact that larger data sets can slow down the system regardless of the speed of the storage medium.
How do wide-column databases differ from traditional relational databases?
-Wide-column databases store data in column families and are not optimized for analytical queries that require filtering across multiple columns, joins, or aggregations. They can only be searched using the primary key, unlike relational databases.
What is the significance of the primary key in wide-column databases?
-The primary key in wide-column databases consists of one or more partition keys and zero or more clustering keys. It is used to distribute data across multiple nodes and sort data within a partition, enabling horizontal scaling.
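A hedged sketch of how a primary key splits into partition and clustering keys in Cassandra, assuming the DataStax cassandra-driver package and a local node; the keyspace, table, and column names are invented:

```python
from cassandra.cluster import Cluster  # pip install cassandra-driver

session = Cluster(["127.0.0.1"]).connect("demo")  # hypothetical keyspace

# sensor_id is the partition key: it decides which node stores the row.
# reading_time is the clustering key: it sorts rows inside a partition.
session.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        sensor_id    text,
        reading_time timestamp,
        value        double,
        PRIMARY KEY ((sensor_id), reading_time)
    )
""")

# Efficient: this query names the partition key, so it hits one partition.
rows = session.execute(
    "SELECT * FROM readings WHERE sensor_id = %s", ("sensor-7",))
```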
Why are document databases a good match for object-oriented programming?
-Document databases are a good match for object-oriented programming because they allow data to be stored in a format that can be naturally represented as an object, such as JSON, without the need for translation.
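A minimal sketch with the pymongo driver, assuming a local MongoDB instance; the database, collection, and fields are invented for illustration:

```python
from pymongo import MongoClient  # pip install pymongo

orders = MongoClient("mongodb://localhost:27017")["shop"]["orders"]

# The in-memory object and the stored document have the same shape,
# so no object-relational translation layer is needed.
order = {
    "customer": {"name": "Alice", "email": "alice@example.com"},
    "items": [{"sku": "A-100", "qty": 2}, {"sku": "B-200", "qty": 1}],
    "total": 59.90,
}
orders.insert_one(order)
print(orders.find_one({"customer.name": "Alice"}))
```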
What are the main benefits of using a graph database?
-Graph databases excel at handling complex, multi-hop relationships between entities, allowing for fast and efficient querying of densely connected data without the need for expensive join operations.
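A hedged sketch using the official neo4j Python driver and Cypher (the labels, relationship types, and credentials are invented); it expresses the "top tags for a user" query discussed later in the transcript:

```python
from neo4j import GraphDatabase  # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

# The edges are stored with the nodes, so this multi-hop query simply
# walks User -> Tweet -> Tag relationships instead of computing joins.
query = """
MATCH (:User {name: $name})-[:POSTED]->(:Tweet)-[:TAGGED]->(t:Tag)
RETURN t.name AS tag, count(*) AS uses
ORDER BY uses DESC LIMIT 10
"""
with driver.session() as session:
    for record in session.run(query, name="alice"):
        print(record["tag"], record["uses"])
```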
What are some trade-offs of using a document database for transactional processing?
-Document databases may not be the best choice for transactional processing due to the lack of enforced referential integrity, which can lead to data inconsistencies if changes to one document are not reflected in related documents.
Outlines
🤖 Data Modeling Challenges and Database Options
The paragraph discusses the critical challenge of data modeling at scale, emphasizing that it surpasses the complexity of coding an app. It highlights the limitations of traditional relational databases when dealing with large or complex datasets, which can lead to performance issues. The script introduces alternative data models like graph and wide-column databases, which offer scalability and complexity management. It promises an exploration of various database types, their advantages, and disadvantages, to help make an informed decision based on unique project needs. It also touches on the concept of key-value databases, their efficiency in handling unstructured data, and their use of hash tables for fast data retrieval, noting that these databases are often stored in memory for rapid access.
🔑 Key-Value Stores: In-Memory Performance and Limitations
This paragraph delves into the specifics of key-value stores, their suitability for caching due to their ability to quickly access data using unique keys, and their support for various data types. It points out that these databases are optimized for high-performance applications with low latency requirements. However, it also notes the simplicity of the key-value model, which makes it unsuitable for complex data structures and dynamic queries involving multiple tables. The paragraph also mentions Memcached and Redis as examples of key-value stores, highlighting Redis's capabilities for multi-model data storage and its design for high performance and horizontal scalability, but also noting the trade-offs involved in achieving strong transactional consistency.
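A cache-aside sketch with the redis-py client, assuming a local Redis server; `load_from_database` is a hypothetical placeholder for the slow relational query being cached:

```python
import redis  # pip install redis

cache = redis.Redis(host="localhost", port=6379)

def load_from_database(user_id):
    return f"profile-for-{user_id}"  # hypothetical slow relational query

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)            # O(1) lookup by unique key
    if cached is not None:
        return cached.decode()         # cache hit: the fast in-memory path
    profile = load_from_database(user_id)
    cache.set(key, profile, ex=300)    # cache miss: store with a 5-minute TTL
    return profile
```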
📚 Wide-Column Stores: Horizontal Scaling and Data Partitioning
The focus shifts to wide-column stores, which organize data into column families and are optimized for horizontal scaling. The paragraph explains the concept of primary keys, consisting of partition and clustering keys, and how they enable data distribution across multiple nodes. It discusses the challenges of querying random attributes and the need for data modeling that anticipates query patterns to avoid full table scans, which can be slow. The paragraph also addresses the issue of data duplication and the trade-offs involved in the denormalized form of data storage in wide-column databases, which can lead to inconsistencies.
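A sketch of the "one table per query pattern" idea described above, written as CQL strings; the table layouts are invented to show how the same rows get duplicated under two different partition keys:

```python
# Two query patterns, two tables: wide-column modeling duplicates data
# so that each query can be answered from a single partition.
PRODUCTS_BY_ID = """
CREATE TABLE products_by_id (
    product_id text,
    category   text,
    price      decimal,
    PRIMARY KEY ((product_id))
)"""

# The same rows again, now partitioned by category, so "all products in
# a category" never has to scan the whole cluster.
PRODUCTS_BY_CATEGORY = """
CREATE TABLE products_by_category (
    category   text,
    product_id text,
    price      decimal,
    PRIMARY KEY ((category), product_id)
)"""
```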
📝 Document Databases: Flexibility and Denormalization
The paragraph introduces document databases, which store related data in a single document, as an alternative to the strict rules of relational databases. It discusses the benefits of this model, such as ease of handling data, faster data retrieval, and the elimination of the need for joins. However, it also warns of the potential for data duplication and the resulting inconsistencies if not managed carefully. The paragraph highlights the importance of choosing the right use case for document databases and the need for proper indexing and constraints to maintain data consistency and optimize query performance.
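A short pymongo sketch of the indexing point, with invented collection and field names; the unique index also acts as a lightweight constraint in an otherwise schema-free store:

```python
from pymongo import ASCENDING, MongoClient  # pip install pymongo

orders = MongoClient("mongodb://localhost:27017")["shop"]["orders"]

# A secondary index lets lookups by email avoid a full collection scan.
orders.create_index([("customer.email", ASCENDING)])

# A unique index doubles as a consistency constraint: duplicate order
# numbers are rejected at write time.
orders.create_index([("order_no", ASCENDING)], unique=True)
```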
🔗 Relational Databases: ACID Transactions and Data Integrity
This paragraph underscores the enduring dominance of relational databases, particularly in industries like finance and e-commerce, due to their ability to model relational data clearly and maintain data integrity through normalization. It explains the process of normalization and its importance in organizing data to prevent duplication and ensure consistency. The paragraph also discusses the challenges of scaling relational databases horizontally and the complexities involved in maintaining data consistency when partitioning data. It concludes by emphasizing the strength of relational databases in transactional processing, thanks to their ACID (Atomicity, Consistency, Isolation, Durability) guarantees, which ensure the reliability and integrity of stored data.
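A minimal sketch of atomicity using Python's built-in sqlite3 module as a stand-in for any ACID-compliant relational database; the table and account ids are invented:

```python
import sqlite3  # stdlib; stands in for any ACID-compliant RDBMS

conn = sqlite3.connect("shop.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS accounts (id TEXT PRIMARY KEY, balance REAL)")

# Atomicity in practice: both updates commit together, or an exception
# rolls both back -- the transfer can never half-happen.
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute(
            "UPDATE accounts SET balance = balance - 100 WHERE id = 'alice'")
        conn.execute(
            "UPDATE accounts SET balance = balance + 100 WHERE id = 'bob'")
except sqlite3.Error:
    pass  # transaction rolled back; balances are untouched
```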
🌐 Graph Databases: Navigating Complex Relationships
The final paragraph explores graph databases, which represent entities as nodes and relationships as edges, allowing for direct storage of connections between data points. It illustrates the efficiency of graph databases in handling queries involving densely connected data, as they eliminate the need for expensive join operations. The paragraph also touches on the challenges of managing and maintaining graph databases, especially at scale, and the need for expertise in dealing with complex graphs. It concludes by discussing the scenarios where graph databases excel, such as in data centers where complex multi-hop relationships need to be traversed quickly and efficiently.
Keywords
💡Data Modeling
💡Relational Databases
💡Graph Databases
💡Column-Family Stores
💡Key-Value Stores
💡In-Memory Storage
💡Data Consistency
💡Horizontal Scalability
💡Document Databases
💡ACID Properties
💡Data Integrity
Highlights
The biggest challenge in app development is data modeling for scalability, not just coding.
Relational databases dominate the market with a 72% share, using tables and relationships for data modeling.
Alternative data models like graph and wide-column databases can handle complexity and scale better than relational databases.
Key-value databases are simple, fast, and efficient for data retrieval using unique identifiers.
In-memory storage provides blazing-fast data retrieval but is limited by RAM size and cost.
Storing entire databases in CPU cache memory is impractical due to high cost and data size limitations.
Key-value databases are well-suited for in-memory storage, offering faster responses.
Memcached and Redis are examples of key-value stores, with Redis offering multi-model database capabilities.
Wide-column stores are optimized for high partitionability and horizontal scaling.
Document databases like MongoDB are ideal for object-oriented programming and denormalized data storage.
Document databases can struggle with maintaining data consistency across related entities due to lack of referential integrity.
Relational databases excel in transactional processing with strong ACID guarantees.
Graph databases are optimized for querying complex relationships and can perform faster than relational databases for such tasks.
Graph databases require expertise to manage and can be challenging to distribute across multiple nodes.
The benefits of graph databases become more evident with complex, multi-hop relationships between entities.
Scaling a relational database horizontally can be difficult due to the reliance on relationships between tables.
Data modeling is crucial for maintaining data integrity and preventing issues like data inconsistency.
Transcripts
the biggest challenge when writing an
app isn't writing code but rather
figuring out how to model data in a way
that works at scale if you don't put
enough thought into it your app could
suffer from slow performance data
inconsistencies and difficulties in
adding new features not good we have the
old school relational databases which
are still leading the space with a
market share of 72 percent they use
tables and relationships to model data
which is great for most apps however
when your data starts growing in size or
complexity it can become a bottleneck
that's when you might want to consider
alternative data models like graph
databases which can handle complexity or
wide-column databases that can scale
data at astonishing levels of course
with so many options available it can be
tough to know which one is the best fit
for your project but don't worry we'll
break down the different types of
databases with their pros and cons so
you can make a decision that works for
you so let's start the journey to find
the perfect database for your unique
needs
imagine we have large amounts of
semi-structured data and we assign it a
set of unique identifiers we just
created a collection of key value pairs
for fast access so this model is
flexible enough for unstructured data
this type of database implements a hash
table to store unique Keys along with
the pointers to the corresponding data
values since the data structure is
basically an index it's very fast and
efficient for data retrieval it uses a
hash function to quickly calculate the
location for storage based on the key
then it uses the same key to quickly
locate the corresponding value in memory
in constant time
since the model is so simple and the
data set is rather small these databases
are often stored in memory this makes
data retrieval blazing fast sometimes
with sub millisecond response time other
data models such as relational and
document based are not as suited for
in-memory storage this is because they
tend to have more complex data
structures with fields and columns and
also relationships and that can require
more memory and processing power to
handle but how much data can we store in
memory since Ram is so fast why don't we
load all database types in memory some
people may argue that today we can store
huge amounts of data in memory and there
are database clusters with zillions of
nodes that keep data in memory let's
consider that the cost is not a problem
although we should take a glance at it
first we should consider that for
Mission critical apps we would need to
persist data on disk as well because in
case of a crash we would lose some or
all the data there are two main ways to
synchronize the Ram with the disk but
they both significantly affect the
response time from nanoseconds to
milliseconds second no matter how fast
the storage medium is in the end the
size of the data will make the system
slower why don't we store the entire
database in the CPU cache memory this is
considered to be the fastest first
because the cost will be very high
second because the size of the data
determines how fast the data is
retrieved that's why even the CPU cache
memory has three layers as the rule of
thumb if we want blazing fast responses
for a set of data the size should be
relatively small so key takeaway number
three is that key value databases are
well suited to be stored in memory which
in turn provides faster responses
finally in this category we can mention
memcached and redis although nowadays
redis offers the possibility of a
multi-model database
simply put key value stores are not
designed for complex data structures so
if you need to execute Dynamic queries
or perform complex aggregations based on
multiple tables then you should look at
document or relational databases
key value
databases like redis are designed for
high performance and horizontal
scalability rather than strong
transactional consistency although
redis supports executing multiple
commands as a single Atomic transaction
using the feature of multi-command
transactions or using lua scripting it
doesn't support the full acid by default
it requires some tricks and
configurations to reach the acid
properties and they usually come with
trade-offs
and finally key value stores are not
well suited for data warehousing this is
because they are not designed to store
large amounts of historical data and
they don't provide features such as data
compression and indexing
traditional SQL databases were designed
for functionality rather than speed at
scale so a cache is often used to store
the replies of costly queries from the
relational database to reduce latency
and significantly increase throughput
caching is all about quickly accessing
frequently used data and key value
stores are perfectly designed to do just
that key value stores are perfect for
caching because they can quickly
retrieve data using a unique key rather
than searching through a large data set
also key value stores allow for many
data types as value including linked
lists and hash tables furthermore they
are stored in memory which further
increases the access speed so key value
databases are optimized for high
performance and low latency applications
however this data model might be too
simple for other use cases so we'll move
on to the next data model in terms of
complexity
key value stores are fun and simple next
with wide-column stores things start to
get interesting these databases store
data in column families although they
look similar to the tables in a
relational database they are not
actually tables we'll realize this when
we try to make a query on a random
attribute and we won't be able to do it
this is because we can search only by
using the primary key similar to the key
value stores so this model is not
optimized for analytical queries that
require filtering across multiple
columns tables joins or aggregations
speaking of the primary key this is one
of the most important concepts of
wide-column databases a primary key consists
of one or more partition keys and zero
or more clustering Keys sometimes called
sort keys for instance in Cassandra each
data set is partitioned by a partition
key which is a combination of one or
more columns basically we have a tool
integrated in our data model to split
the data set and distribute it on
multiple nodes we see that wide-column
databases are highly partitionable and
allow for horizontal scaling at the
magnitude that other types of databases
cannot achieve so here the partition key
is used to distribute data on multiple
partitions or nodes and the clustering
key is used to sort data within a
partition so a key takeaway here is that
wide-column databases are highly
partitionable
wide-column databases store data
in denormalized form this means that
all data related to a particular item is
stored together in a single row rather
than being spread out across multiple
tables this allows for faster data
retrieval and easier querying you don't
have to flip back and forth between
multiple tables and do joins to get all
the information you need all information
is in one place however this will be at
the cost of potentially having some
duplicates and duplicating data is the
root of all data inconsistencies among
other problems we'll see next so the key
takeaway here is that wide-column
databases store data in
denormalized form
trying to find a row for a random
attribute is like trying to find a
needle in a haystack but instead of a
needle you're looking for a specific
piece of data and instead of a haystack
you're looking on the entire cluster
that can have hundreds of nodes you
probably know that scanning a full table
can be a really slow process now imagine
that you have to scan hundreds of tables
to find a piece of data here to avoid
this problem we'll make use of the
category attribute as a partition key
this means that if you know that you're
going to need to search by a specific
attribute you'll have to model the data
in a way that puts that attribute as a
partition key basically you will
partition all data based on that
attribute but what if you need to filter
data by multiple individual attributes
then you'll have to create a new table
for each query pattern this can create a
lot of duplicated data in addition to
the denormalization duplication but
that's okay because wide-column
databases are really fast for writes
Jokes Aside if you need to do a lot of
filtering or analytic queries wide-column
stores are not the best option
for transaction processing consistency
is key however by default wide-column
databases are eventually consistent this
means that the data will be eventually
consistent across all the nodes in the
cluster and it doesn't guarantee that
all nodes will be consistent at the same
time it's normally much more expensive
in terms of latency and availability to
work with transactions in such an
environment that's why wide-column
databases such as Cassandra offer the
option of lightweight transactions
however these are still quite expensive
in multi-node environments where
multiple round trips are necessary
between the coordinator and the other
nodes so wide-column dbs are not the
best option for acid transactions
adding new nodes to a Cassandra cluster
is as simple as adding new blocks of
Legos and it's the same for removal data
partitioning is embedded into the data
model which means it can be easily
distributed across multiple nodes in the
cluster this makes horizontal scaling a
breeze but what happens with the
existing data when a new node is added
is the whole data redistributed in order
to maintain an even distribution of data
not really because that would be way too
costly Cassandra uses the concept of
consistent hashing and virtual nodes to
minimize the amount of data that needs
to be moved around the cluster this
algorithm also ensures that the data is
evenly distributed across all nodes we
have a separate video on consistent
hashing and virtual nodes so please
check it if you want to find out more so
if adding a node is so simple we can
scale horizontally as much as we want in
fact it has been reported that Apple is
using 1,000 Cassandra clusters with
300,000 nodes and storing 100 petabytes of
data for multiple use cases such as
iCloud and Siri so the wide-column
superpower is horizontal scalability
wide-column databases are considered
to be good for writes for two main
reasons first they use a write-optimized
storage architecture which allows them to
handle huge numbers of writes very
quickly for instance Cassandra uses a
technique called log structured storage
which allows it to write data on disk in
large sequential blocks because of this
principle it doesn't have to spend time
to look where the data is stored in real
time it will deal with it later in
batches reason number two for fast
writes is because of its partitioned
architecture which allows for writes to
be executed in parallel on multiple
nodes at the same time
instead of spreading data across
multiple tables and then join them back
together like in a scavenger hunt a
document database puts all the
information related to an entity in a
single document a document database is
the classic example of denormalization
using something like mongodb it's like
giving your data a break from all the
strict rules and regulation of a
traditional relational database instead
of splitting data into multiple tables
and establishing relationships between
them you just store all related
information within a single document
this is truly a more convenient way to
handle data but sometimes this may lead
to some duplication of data and if data
duplication gets out of hand you'll
enter into the hell of data
inconsistencies where if one copy of the
data is updated it may not be updated in
other copies leading to conflicts and
inconsistent information data
duplication can lead to all sorts of
problems in a chain so you just need to
be careful to choose the right use case
for the document database if you have a
lot of relations between different
entities then a document db might not be
the best choice
the ability to store data in any format
allows for fast prototyping and it
eliminates the need to spend time on
defining the schema and creating tables
so this speeds up development however
without proper constraints it can be
difficult to maintain consistency in
data across different documents and this
can limit the types of queries that we
can perform on the data therefore for
more complex use cases you would still
need to think carefully about how you
want to model your data and ensure that
you have the appropriate indexes and
constraints in place
document databases often have more
advanced indexing capabilities they
support secondary indexes with the
following types simple compound
geospatial unique or full text indexing
so make sure to index your data
correctly and understand the performance
implications of different types of
indexes without proper indexes mongodb
can have poor performance especially
when working with large sets of data
with indexes it's easier to optimize
queries and improve performance this
will allow you to perform complex
queries on huge amounts of data like no
other data model
if you need to handle a lot of complex
relationships a document database may
not be the best choice in fact document
databases such as mongodb recommend
embedding documents instead of using
one-to-many or one-to-one relationships
this is the general rule unless there is
a compelling reason not to do so but you
can actually model some relations in a
document database but you will not have
the same level of features and integrity
second joining data from multiple tables
can be a resource intensive operation
this can slow down query performance and
this is where relational databases
sometimes struggle
as the size of the data grows join
operations become more and more
expensive now in a document database
which is highly scalable this
could mean a significant performance
impact on the entire database system
furthermore maintaining data
consistencies between related entities
can be a difficult task in document
database this is because there is no
enforced referential integrity and
changes to one document may not be
reflected in other related documents
a document oriented database is the
perfect match for object-oriented
programming one side can express the
model in its natural language usually an
object that can be represented as a Json
and the other side can understand it
without any translation this is not the
case with object-oriented programming
and relational databases for decades it
has been attempted to close the gap with
different Frameworks and tricks but they
just don't mix so well
so document databases are easy to
scale they provide indexing powerful ad
hoc queries and analytics and they also
have some features for transactional
support
relational
databases have been the dominant choice
for data storage for decades and their
popularity only continues to grow
despite the rise of alternative
databases such as nosql relational
databases remain a staple in many
Industries especially in finance and
e-commerce there are several reasons for
their continuous dominance first all
data in most applications is relational
customers make orders orders contain
products and products are found in
stores and so on
furthermore the relational model with
its tables rows and columns provides
clear and straightforward way to model
the data making it easy for developers
to work with
before making use of relational
databases you need to model your data
according to the strict rules of
normalization or you can just rely on
your intuition and you might learn the
hard way why some things need to be done
in a certain way normalization is the
process of organizing data in Separate
Tables it's like organizing your closet
and just as you might separate your
shirts from your pants normalization
involves breaking up data into smaller
more manageable pieces these rules help
to prevent clutter and duplication and
improve data Integrity but what does
data integrity mean just like a tidy
room gives you a peace of mind that
everything is in place data Integrity
gives you peace of mind that your data
is consistent accurate and not damaged
or lost this sounds easy to achieve
until you have hundreds of concurrent
transactions with a lot of cash involved
scaling horizontally a relational
database can be a difficult task to
achieve although there are solutions for
scaling a relational database such as
replication and sharding they usually
require a significant added complexity
both in terms of infrastructure and
administration to be able to scale a
database you need to partition it
however relational databases rely on
relationships between tables and
partitioning the data can break these
relations making it difficult to ensure
data consistency and integrity so if you
need to store large amounts of data
especially less structured data then a
nosql database might be more
suitable
when it comes to transactional
processing relational databases are the
best in town a big part of their success
can be attributed to their
well-established acid guarantees we have
atomicity consistency isolation and
durability and these four ensure the
reliability and integrity of stored data
while other database models also support
the acid properties for transactions
relational databases are still
considered the best option this is
because the structures of tables and
relationships makes it easier to enforce
consistency and maintain data Integrity
which is critical for transactions in
particular cases other database models
May struggle to comply with all acid
properties for instance ensuring
consistency and isolation can be
difficult because multiple transactions
may be executed concurrently now if we
consider a distributed system like a
wide-column database with many nodes it
can be even more challenging to ensure
that each transaction has a consistent
view of the data and things get more
complex when networking connections
might fail or one node might
successfully complete its part of the
transaction and then be required to roll
back its changes because a failure
occurred on another node however the
trade-off for strong consistency is not
being able to scale as much or as easy
in a graph
database data is stored as a connected
graph the nodes in the graph represent
entities such as tweets users tags and
the edges represent the relationships
between these entities such as follows
or mention let's say we want to get the
top 10 tags used in all messages by a
certain user in a relational database we
would have to do a join between the tags
and the Tweet tables which will
basically result in a separate table
however in graph stores relationships
between nodes are stored directly on the
nodes rather than Separate Tables
because of this principle graph
databases don't need to compute the
relationships between data at query time
the connections are already there stored
on the nodes because of this queries
with densely connected data are orders
of magnitude faster
graph databases eliminate the need for
expensive join operations making data
maintenance a breeze
this model is powerful enough to cover
the most complex data structures for
instance neo4j was used to build a
Knowledge Graph at Nasa however to
properly manage and maintain a graph
database it requires a certain level of
expertise unlike other types of
databases graph DBS can be challenging
to learn and manage especially when
dealing with large intricate graphs so
be prepared to invest some time and
effort for getting up to speed
now graph databases are pretty difficult
to model on a single node but what
happens when you need to distribute the
graph on multiple nodes well you'll just
need to consider a lot of stuff such as
how to distribute the edges across the
nodes or how to balance the graph data
evenly and if these are not hard
problems then what if some node is
failing or what about Dynamic node
addition or removal
and the list goes on
while graph databases are optimized for
traversing and querying relationships
they may not be the best choice for
write heavy workloads in order to
support a high volume of writes you need
to write to multiple nodes in parallel
however the overhead of maintaining the
graph structure connected across
nodes will slow down the scaling pretty
quickly and therefore the write
throughput and there is also a high risk
of data inconsistency and conflicts
other models such as key value or
wide-column are much more suitable for write
heavy loads graph databases can become
quite large and unmanageable especially
when dealing with complex relationships
so be prepared to invest in some serious
Hardware resources if you want to use a
graph database
the
benefits of a graph database become more
pronounced when dealing with complex
multi-hop relationships between entities
for example in a data center scenario it
may be necessary to Traverse several
relationships to find all the switches
of a particular Data Center and then
another hop to find all the interfaces
of that data center in a graph database
this can be achieved in a single
traversal making the query much faster
and more efficient in contrast
relational databases typically store
relationships between entities as
foreign keys in Separate Tables
requiring expensive join operation to
Traverse the relationships between
entities this can result in slow and
complex queries particularly when
dealing with densely connected data