Improve API Performance by Utilizing the Platform Cache
Summary
TL;DR: In this video, Phil Bergner, a Salesforce ecosystem veteran, discusses enhancing API performance through the use of platform cache. He shares his experience developing apps like Eureka, designed for field service teams needing offline data access. Bergner emphasizes the importance of optimizing data syncing for mobile devices with limited connectivity. He explains the Lightning Platform cache, its types (session and org), and how it can speed up data retrieval, reducing API calls and CPU time. He also covers cache management, best practices, and the impact of caching on performance, using benchmark data from his product to illustrate the significant time and resource savings achieved.
Takeaways
- 🚀 **Platform Cache Introduction**: The talk introduces the concept of utilizing the Salesforce platform cache to enhance API performance.
- 👤 **Speaker Background**: Phil Bergner, with 11 years of experience in the Salesforce ecosystem, shares his insights from Professional Services to Product Development.
- 📈 **Performance Benchmarking**: Phil discusses the importance of benchmarking and how platform cache can be leveraged to improve data retrieval times, especially for offline mobile usage.
- 🔒 **Data Syncing Model**: The 'briefcase model' is highlighted, emphasizing the need for field service technicians to have all necessary data primed before going offline.
- 📱 **Mobile Device Considerations**: The talk addresses the challenges of mobile device connectivity speeds and memory limitations, underscoring the need for optimization.
- 💾 **Understanding the Platform Cache**: The platform cache is explained as a layer that stores data for faster retrieval compared to traditional methods.
- 🔑 **Cache Types**: Different cache types are discussed, including session cache (user-specific) and org cache (shared across users), along with their time-to-live (TTL) considerations.
- 🛠️ **Cache Implementation Steps**: Steps for implementing cache include deciding on cache type, considering data freshness versus performance trade-offs, and managing cache limits.
- 📊 **Cache Management and Diagnostics**: Salesforce provides tools for managing and diagnosing cache usage, including visualizations of cache data and partition allocations.
- 🔄 **Cache Refresh Strategy**: Strategies for refreshing cache data are discussed, including handling cache misses and the use of feature flags for toggling cache usage.
- 📈 **Benchmark Results**: The talk shares benchmark results, showing significant reductions in sync time and CPU usage when using the platform cache.
Q & A
What is the main topic of the video?
-The main topic of the video is improving API performance by utilizing the Salesforce platform cache.
Who is the speaker in the video?
-The speaker in the video is Phil Bergner, who has been in the Salesforce ecosystem for about 11 years.
What is Phil Bergner's background in the Salesforce ecosystem?
-Phil Bergner initially worked in Professional Services doing Salesforce implementations and later pivoted to product development, working on AppExchange apps, including a healthcare-focused app and, most recently, Eureka.
What is Eureka and how does it relate to the platform cache?
-Eureka is a Salesforce-native mobile workforce solution for guided procedures, processes, and collecting data offline, primarily used by field service teams. It uses a briefcase model together with the platform cache to optimize data syncing for offline use.
What are the benefits of using the platform cache as discussed in the video?
-The platform cache provides faster performance by caching data, reducing the time and memory required to compile complex data structures, and speeding up data retrieval from external systems.
What is the trade-off mentioned for using the platform cache?
-The trade-off for using the platform cache is that it may not necessarily contain live data, so there's a balance between performance and the need for the absolute latest data.
What are the different types of cache mentioned in the video and their uses?
-The video mentions session cache, which is user-specific data, and organization cache, which is applicable across many users. The choice depends on whether the data is specific to a user or broadly applicable.
What is TTL in the context of the platform cache?
-TTL stands for Time To Live and refers to the lifespan of an item in the cache, which for session cache is up to 8 hours or when the session expires, and for organization cache can be configured up to a maximum of 48 hours.
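For illustration, setting a TTL happens when an item is written to the cache. Here is a minimal Apex sketch; the partition and key names are placeholders, not from the video:

```apex
// Hypothetical partition name; partitions are created under Setup > Platform Cache.
Cache.OrgPartition orgPart = Cache.Org.getPartition('local.MyPartition');

// Third argument is the TTL in seconds; org cache allows up to 48 hours (172,800 s).
orgPart.put('templateJson',
    JSON.serialize(new Map<String, Object>{ 'name' => 'Safety Form' }),
    172800);

// Session cache entries expire with the session or after at most 8 hours (28,800 s).
Cache.SessionPartition sessPart = Cache.Session.getPartition('local.MyPartition');
sessPart.put('userTimeZone', UserInfo.getTimeZone().getID(), 28800);
```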
What are some considerations when using the cache as outlined in the video?
-Considerations include ensuring data safety by not treating cache as permanent storage, optimizing usage by storing a few large items rather than many small ones, and being aware of the default limits for different cache types.
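The "few large items" tip can be sketched as bundling related records into one serialized entry rather than caching each record under its own key (names here are illustrative):

```apex
Cache.OrgPartition part = Cache.Org.getPartition('local.MyPartition');

// One entry holding a whole list, instead of hundreds of tiny entries.
List<Account> accounts = [SELECT Id, Name FROM Account LIMIT 200];
part.put('accountBundle', JSON.serialize(accounts), 3600);

// Read the bundle back and deserialize it in one step.
List<Account> cached = (List<Account>) JSON.deserialize(
    (String) part.get('accountBundle'), List<Account>.class);
```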
How can one manage and diagnose the platform cache usage in Salesforce?
-Salesforce provides visualizations and functionality for managing and diagnosing cache usage, including dashboards showing cache utilization, breakdowns by cache type, and the ability to purge individual records or refresh the entire cache.
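Much of this is also available programmatically. A rough Apex sketch of inspecting and purging cache entries (partition and key names are placeholders):

```apex
Cache.OrgPartition part = Cache.Org.getPartition('local.MyPartition');

// List every key this namespace has stored in the partition.
for (String key : part.getKeys()) {
    System.debug(key + ' present? ' + part.contains(key));
}

// Purge a single stale entry, e.g. from a trigger after a record changes.
Boolean removed = part.remove('templateJson');
```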
What was the outcome of the benchmark scenario presented in the video?
-The benchmark scenario showed that utilizing the platform cache reduced full sync time by 5 seconds and decreased the average number of queries per request from 25 to 7, significantly improving performance.
Outlines
🚀 Introduction to Platform Cache for API Performance
The speaker, Phil Bergner, introduces the topic of enhancing API performance using the Salesforce platform cache. He shares his professional background, including 11 years of experience with Salesforce, moving from Professional Services to product development. Phil discusses his work on AppExchange apps, particularly Eureka, which is a mobile workforce solution for guided procedures and offline data collection. He emphasizes the importance of optimizing data sync for field service teams, who rely on the Salesforce mobile app's briefcase model to prepare for fieldwork with potentially limited connectivity. The platform cache is presented as a solution to improve performance by caching data for faster retrieval, especially beneficial for complex, memory-intensive data compilation.
🛠️ Understanding and Utilizing Platform Cache
Phil explains the concept of the Lightning Platform cache, which offers faster data retrieval compared to traditional methods. He outlines scenarios where caching is beneficial, such as compiling complex data or retrieving data from external systems. The trade-off between performance and data freshness is discussed, highlighting that cached data might not be live. The session and org caches are introduced, with session cache being user-specific and org cache applicable across multiple users. The time to live (TTL) for cache items is also explained, with session cache lasting up to 8 hours and org cache configurable up to 48 hours. Phil advises considering the implications of caching, such as data loss and optimization strategies, and being aware of the default limits for different cache types, especially the 3 megabytes of free capacity for AppExchange apps.
📊 Salesforce Platform Cache Management and Diagnostics
The paragraph discusses the visualization and management tools available within Salesforce for monitoring platform cache usage. It mentions the ability to see cache utilization, break down by cache type, and set up partitions for different use cases. Phil also covers the functionality to purge individual records or refresh the entire cache, and the diagnostic data available, such as record-by-record cache content, namespace, size, access count, and last access time. He emphasizes the importance of this data for optimizing cache usage and making code adjustments based on actual usage patterns.
📈 Benchmarking Platform Cache Performance
Phil presents a benchmark scenario where he tested the performance of the platform cache with his product code. The test involved syncing a large amount of data for offline use by a field service technician. He details the process, from receiving requests to sending responses, and the CPU time usage. The results showed a significant reduction in sync time and CPU usage when using the platform cache. Phil also discusses the code implementation, including feature flags for cache utilization and lifespan, handling cache misses, and exceptions. He concludes with a simpler example using Salesforce's CacheBuilder interface for beginners and additional considerations for cache management.
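The CacheBuilder pattern mentioned here can be sketched roughly as follows; the class name and query are illustrative, not the speaker's actual product code:

```apex
// A Cache.CacheBuilder implementation: doLoad runs only on a cache miss,
// and its return value is stored in the org cache automatically.
public class AccountCache implements Cache.CacheBuilder {
    public Object doLoad(String accountId) {
        // The "grunt work" that runs when the key isn't cached yet.
        return [SELECT Id, Name FROM Account WHERE Id = :accountId];
    }
}
```

Callers then just write `Account a = (Account) Cache.Org.get(AccountCache.class, someId);` and never deal with hit-or-miss logic themselves.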
🔗 Helpful Resources and Conclusion
In the final paragraph, Phil provides helpful resources for learning more about the platform cache, including the Apex developers guide and Trailhead modules. He summarizes the benefits of using the platform cache, such as reduced full sync time and decreased CPU usage per request. Phil also mentions the importance of considering data refresh strategies, handling cache misses, and understanding cache visibility and eviction policies. The conclusion wraps up the discussion on the platform cache and its impact on API performance.
Keywords
💡API Performance
💡Platform Cache
💡Field Service Teams
💡Briefcase Model
💡Time to Live (TTL)
💡Session Cache
💡Org Cache
💡Cache Misses
💡Feature Flags
💡Cache Eviction
Highlights
Introduction to improving API performance using platform cache.
Speaker Phil Bergner's background in Salesforce ecosystem and product development.
Discussion on the importance of offline data for field service teams.
Challenges with memory-intensive processes in mobile devices.
Definition and benefits of the Lightning Platform cache for faster data retrieval.
Use case scenarios for platform cache including complex and frequently used data.
Considerations for using cache such as data freshness and performance trade-offs.
Steps to decide between session cache and org cache based on use case.
Explanation of Time to Live (TTL) for cache items.
Development best practices including safety and data loss prevention.
Optimizing cache usage by storing fewer large items rather than many small ones.
Details on cache limits and capacities for different org types.
Visualization and management of cache usage within Salesforce.
Ability to purge individual records or refresh the entire cache.
Benchmark scenario demonstrating the impact of platform cache on performance.
Code example and explanation of how to implement platform cache in Apex.
Use of feature flags for toggling cache usage and customizing cache lifespan.
Handling cache misses and fallback to traditional data retrieval methods.
Results of benchmark tests showing significant performance improvements with cache.
Conclusion on the practical applications and benefits of platform cache for end users.
Introduction to Salesforce's CacheBuilder interface for easier cache implementation.
Additional considerations for cache management and data handling.
Helpful resources for learning more about platform cache.
Transcripts
[Music]
All right, so today we're going to be talking about improving API performance by utilizing the platform cache. It's definitely a mouthful, but I think it's a really powerful feature, and we're going to dive into things, so buckle up.

All right, let me tell you a little bit about myself. My name is Phil Bergner. I've been in the Salesforce ecosystem for about 11 years now. The first few years I worked in Professional Services doing Salesforce implementations. After that I pivoted to the product development side, where I've been working on some AppExchange apps: a healthcare-focused one and, most recently, Eureka.

So I want to tell you a little bit about what we do, because this is what led me to utilize the platform cache, and we're going to be looking at some benchmark data and some truncated code that we utilize for the platform cache, so I think this is important background. We're a Salesforce-native mobile workforce solution for guided procedures and processes and collecting data offline. We work a lot with field service teams, and I want to focus on that offline part, because that's the most critical part. Just like the Salesforce Field Service mobile app and the Salesforce mobile app, we utilize a briefcase model. The idea there is that if I'm a field service tech and I'm getting ready to go out in the field, I'm going to need all my data primed before I go, right? I may have no connectivity, or intermittent connectivity, so I need to have all that data available just in case I need it. They'll be syncing data like work orders, accounts, work order line items, service appointments, product catalogs. We have customers that are syncing over 100,000 records just for a single day, just because they're not sure exactly what they're going to need. As you can imagine, that's a pretty memory-intensive, time-intensive process, and then you're talking about mobile device connectivity speeds. We really are trying to optimize it as much as possible, and that's what led us to explore the platform cache: trying to eke out every little bit of performance we
can. So what is the Lightning Platform cache? It provides faster performance. It is a cache layer. The idea is that you can put data in the cache, and then when you retrieve it, it's going to be faster than if you were to do it through traditional methods. For our scenario, that's the first bullet there: complex data that's time- and memory-intensive to compile. We were running about 25 SOQL queries and compiling a very large JSON structure that was going to be returned to the end user. Some other scenarios could be data retrieved from an external system. Maybe you have an external billing system: they're reaching your API, you have to call out to another API to get that data, and there's a lot of latency involved there. If you can retrieve that data and put it in the cache, you're going to be able to return it a lot faster. And really, just identify any frequently used scenarios where you're running the same process over and over and it's memory-intensive, like I've been saying. Those are all good candidates for utilizing the platform cache. I think the important consideration there is that when it's cached, it's not necessarily live data, right? So that's an important trade-off: performance versus how often that data is changing, and how important it is that it's the absolute latest data.

All right, so say you've gone through your process and figured out, "Oh, I found an area where I might want to utilize the platform cache." What are my next steps? The first thing you want to decide is which type of cache makes sense for your use case. It could be the session cache, the org cache, or some combination of both. The session cache is really user-specific data; it's only going to be served for a specific user. Think user record information like time zone, preferred currency, stuff like that. The org cache is really data that's going to be applicable across many users. For our use case, they're out in the field doing things like home health assessments and safety inspection forms. Those forms, and that type of data, are going to be accessible to a broad set of users, right? It's not for a specific user. They may be on a different site or capturing different data, but the form itself isn't changing, so that was a perfect example for us to utilize the org cache. One other thing I want to point out here is the time to live, often abbreviated TTL in some of the documentation. That's really just the lifespan of that item in the cache. For the session cache, that's up to 8 hours, or whenever the session expires, and for the org cache you can configure that as well, but it can be up to a maximum of 48 hours.

What are some considerations for using the cache? One is: be safe. I think that's a pretty good development practice in general. Maybe there are some cowboys who don't believe in that and just go straight to prod, but what I'm talking about specifically here is that there's no guarantee against data loss from the cache, right? It's not a permanent storage solution. So don't put data in the cache thinking, "Oh, it's good to go, it's going to be there, it'll serve up quicker." Like we just said, caches expire; that data isn't permanent. You need a persistent storage solution for where you're storing your data. Another is: optimize usage. This is a tip from some of the Salesforce documentation, but basically it's better to store a few large items than a whole bunch of small items.

And then the other callout I want to make here is some of the default limits that are available for different org types. The most interesting one for me was AppExchange apps getting 3 megabytes of provider free capacity. For us, that unlocked a lot of functionality, because we're an AppExchange app, and going into a customer's org you're not sure what org type they're going to have, or how much of the cache could be utilized by other AppExchange apps or custom code in their org. So being able to be installed in a subscriber org and get three megabytes of capacity means we know that we have at least that much, and we can kind of control that ourselves. And if they want to augment that with more, even
better. So I want to walk through some of the visualizations and functionality that's available inside Salesforce for managing the platform cache and exploring diagnostic data. You see here, this is an example of a cache that's being utilized. There's some dashboard information for how much of the cache is being utilized and the breakdown by the type of cache you're using; here it's all dedicated to that AppExchange free capacity. And then partition allocations: you can set up different partitions of the cache for different use cases. So if you have 10 megabytes, you could dedicate two to a specific use case in a specific partition and dedicate eight to a different use case, or however you want to break it down. If you were to click into a specific partition, this is what you would see. In this scenario we only have one partition, so it's utilizing all of it, and then this, again, is our metric for how much of the org cache we're currently using.

This one is a little crazy-looking, I know, but I just want to show that this visualization is available: basically a pie chart breakdown of how your cache is being utilized. I think the next screen is going to be a little bit easier. In the UI, once you scroll down, this is what you're going to see. This is really a breakdown of what's currently in your cache, on a record-by-record basis. The way you work with the cache is that you basically put in a key; it's a key-value pair, like an Apex map, however you want to think of it. You put a record in there, and you're going to access it by that key. Here you can see the specific records that have been added to this cache; the namespace they exist in, whether it's the local namespace or a package namespace; and the size. This is the size as it exists in the cache, so there is a compression algorithm going on. I don't know exactly what it is, but it does seem to chunk stuff down to pretty small sizes, so it's probably significantly smaller than you might expect. And then there are some other metrics here too: the last time it was accessed, the access count, how many times it's been touched. This is all really useful data to figure out how your cache is being utilized. You can make changes to your code based on, "Oh, I thought people were going to be hitting this all the time, and they're really not," and make tweaks and adjustments there. You can also access a good amount of this type of data and some metrics programmatically as well, through Anonymous Apex, or if you were to set up a custom dashboard, you can do that too. The other thing I want to point out here is that you also have the ability to purge individual records from the cache, there on the side column. You also have the ability on the main screen to just refresh the entire cache. If you're like, "Hey, I know something significant has changed, I don't want anyone getting stale data," you can come in there and manually push all that data
out. All right, so let's talk a little bit about this benchmark scenario that I ran. This was with our product code, and it's really just an intense use case: I'm a safety tech, I'm getting ready to go out in the field, I have to sync 350 templates and 70 forms, and I need all this data available offline, keeping in mind that some of these templates can be over a meg of JSON. I wanted to push it to the limits to get some really good data on how performance was going to change. And one thing about my preference for this type of data: I really want it to be production code, not just demo code that's contrived or isn't a really good real-world test. So I ran this using our actual product code, and I'm going to walk through a stripped-down version of what that looks like in just a few minutes
here. All right, so let me hop into that real quick. If you don't know a lot of Apex, that's okay; I'm going to walk through this a little bit, and then a few minutes later I'm going to show you a better kind of starting point. But I wanted to walk through a really stripped-down example of the code that we're running in order to capture some of this benchmark data. In our scenario, you're going to have a REST endpoint set up, and it's calling this helper class, which is going to query some data. In our scenario it's templates, but it could easily be work orders, accounts, whatever type of data you have living in the cache.

One best practice that we've learned over the years is to utilize a lot of feature flags, because you may go into a customer's org thinking, "I'm sure I know how they're going to use it," and you're completely wrong: they want to use it completely differently, or they want to tweak things based on their specific use case. So we utilize feature flags. One of them here is really just a custom setting that defines whether they want to utilize the cache at all. We're making this feature available, but they may say, "Hey, I don't want any chance of stale data being served to my customers; they're all on good connections, or we're not syncing a lot of data, so we don't want to utilize it at all." It's really just a checkbox they can come in and turn on or off. Even if you're not working with a package, it could still be really useful as a kind of kill switch: if you were to roll something out to production and things are going a little awry, you could just log in, uncheck it, and your code could be written to skip over the cache and not utilize it at all. So that's what we're checking here with this boolean: is that set, should we be utilizing the cache? The other customization we give users is the ability to define the lifespan for items in the cache. Once again, this could be a customer-by-customer configuration: maybe some are only updating their templates every couple of days, and some are hitting them all the time. They can determine that lifespan, and we're going to respect it.

Next, this is really just a map of the structure that we're going to return to the end user, and this is where we're going to keep track of the templates that we don't find in the cache. I think this is a really important part of the design pattern: there can be cache misses, there can be data you expect to be in the cache that isn't, so you should have a fallback plan set up so that if it's not in the cache, you're still serving that data up.

So we're going to come in here. We've determined if we should use the cache. If we have, we're going to go ahead and access our partition; that's what we're doing here with Cache.Org.getPartition, based on our namespace, to retrieve that partition. Then we're going to iterate through all the IDs that we want. Once again, this could be templates, it could be accounts, whatever you want. We're going to attempt to access each item, and then determine if we found it or not. In this scenario we're just checking if it's not null. If it's not null, then we found something in the cache and we can go ahead. In this scenario I'm just deserializing some JSON, casting it into the structure that I want, and then putting it in my map to return to the end user. Basically, the gist of it is: we found it, we put it in our structure, and this one's good to go back to the API recipient. If it's not found, we keep track of those too, and obviously we're not utilizing the cache then, since we didn't find it in the cache. If we didn't find it, we're going to basically do what you would assume your code would do normally, right? This can be a helper method: just go do all the grunt work, all your queries, compile the structures, put them all together. That's what we're abstracting away there; we're going to go ahead and put in our data to return.

And then, if we've gone ahead and retrieved this data and done all the grunt work because it wasn't previously in the cache, then if the cache is enabled, let's put it in there, right? That way, if this person syncs again, or the next person comes online and needs to sync that same data, it's going to be in the cache and good to go. That's what we're doing with this statement. It's just like a map; you just do a put statement. In this scenario I'm serializing it, and then this last parameter here is just our lifespan for how long we want this to persist in the cache. One other thing to be aware of is that there are a couple of specific exceptions you may want to handle. You can decide how you want to handle this best for your use case, but do a try-catch, and then, like ItemSizeLimitExceeded, there's a whole list of them in the Apex documentation; decide how best you want to handle those.

And that's it. So then: it was either in the cache or it wasn't; if it wasn't, we put it there, and we're going to return that to the end user.
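The flow just described might look roughly like this in stripped-down form. The class, partition, and flag names here are placeholders, not the actual product code:

```apex
public class TemplateCacheHelper {
    // In the real product these come from custom-setting feature flags.
    static Boolean useCache = true;
    static Integer ttlSecs = 86400;

    public static Map<Id, Object> getTemplates(Set<Id> templateIds) {
        Map<Id, Object> results = new Map<Id, Object>();
        Set<Id> misses = new Set<Id>();
        Cache.OrgPartition part = Cache.Org.getPartition('local.Templates');

        for (Id tid : templateIds) {
            Object cached = useCache ? part.get((String) tid) : null;
            if (cached != null) {
                // Cache hit: deserialize and add to the response structure.
                results.put(tid, JSON.deserializeUntyped((String) cached));
            } else {
                misses.add(tid);
            }
        }

        // Fallback: cache misses go through the normal query/compile path.
        for (Id tid : misses) {
            Object built = buildTemplate(tid);
            results.put(tid, built);
            if (useCache) {
                try {
                    part.put((String) tid, JSON.serialize(built), ttlSecs);
                } catch (Cache.ItemSizeLimitExceededException e) {
                    // Too large to cache; serve it uncached and move on.
                }
            }
        }
        return results;
    }

    // Stand-in for the "grunt work": the queries and JSON compilation.
    static Object buildTemplate(Id tid) {
        return new Map<String, Object>{ 'id' => tid };
    }
}
```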
All right, so I'm going to show a visualization of this at the end, but I at least want to show some of the raw data here. In this scenario I ran 12 requests for this data. That's the leftmost column. The start column is the time the endpoint received the request, right? I'm trying to exclude any mobile latency or anything; this is when the request hit the server. The next column is when the response was sent to the mobile app, and the elapsed column is the CPU time usage, how long that took on the server side. Our mobile app does a little bit of smart batching here based on expected payload sizes; that's why the first six, I think, kick off at the same time. But we do have some data down here at the bottom. Once again, I'm going to show a visualization of this in a second, but from start to end it was 17 seconds from when we received the first request to when the last response was sent, and an average CPU usage time of 9 seconds. What about with the platform cache? You see here I ran the exact same tests with similar data, but the start-to-end time was 12 seconds and the average CPU time was 6. So let me just show you a visualization of that real quick. You can see here that utilizing the cache, it was at least as fast, and in some cases we saved a significant amount of time with the cache.

I'm going to go ahead and jump to the conclusion. From the end user's perspective, the full sync time was reduced by 5 seconds, from 17 to 12. Like I said, we're moving a lot of data back and forth across the mobile connection, but you can imagine: if you're a tech, you're at your office or at your house, you're ready to go out for the day, you push that button, and you're waiting for that green light to come through that says you have all your data. Every second counts. So it was a pretty significant improvement, I feel, for not very complex code, right? We didn't go back and completely rewrite stuff. I'm basically just trying to access it in the cache; if it's not there, I'm putting it there, and otherwise I'm just going about my code the way it normally would. The other thing I want to point out here is that, per request, we went from 25 queries down to seven, and the only reason we still have that many is because there's some data that we always retrieve in real time. That could be something like patient medication data, right? You don't want to go out there with stale data, so we still have some data that we retrieve in real time that we would never want to serve from the cache. And the average CPU time was reduced by 3 seconds as well. So if you're familiar with platform limits for the number of SOQL queries, and CPU usage too, you're also buying yourself a little bit more breathing room, depending on how close you are to hitting those limits
already. So, like I said earlier, I want to show a simpler example, if you're just getting started and want to try something out. Salesforce has this CacheBuilder interface available, and that's what this is an example of. It's really pretty straightforward. This example is very similar to the one they have in their documentation; I just wanted to keep with the theme of retrieving template data, and to show how it works slightly differently. You can see down at the bottom, this is the usage, right? This is how you would retrieve data. Using the Cache class, you retrieve it, telling it what type of data it is. Once again, this could be anything: accounts, work orders, whatever type of data you have in there. The way it works is that with the implementation above, when you do that get to retrieve the record, it's going to determine if it's in the cache. If it is, it's going to serve that up. If it's not, it's going to run this doLoad method; you just tell it what to do in order to retrieve that data if it's not in the cache, and it will automatically put it in the cache for you. So, depending on how much fine-grained control you want over when you're serving from the cache, what data is going into the cache, and when, this is a really great starting point. It abstracts that away and says: let me handle whether it's in the cache or not; you just tell me what to do if it's not in the cache, and I'll handle where it comes
from. What are some additional considerations? Some of these we already hit on a little bit. When refreshing data, consider: what type of data is it, and how stale can it be? Consider using, like I said, the time to live for the cache, or even dynamic refresh options. Programmatically, in Apex, you can say, "Hey, something changed," and from a trigger purge that record from the cache. There's a lot of functionality available there for handling that dynamically. The other one is, like I said, handling cache misses: if the data isn't in the cache, make sure your code is set up to handle that scenario.

A couple of other ones: like I mentioned earlier, the cached data is compressed, so even though it may not seem like a whole lot of space, it might be more than you're expecting. Cache visibility: if you are in a managed package, you have the ability to set whether that cache can be accessed by code outside of your package or not. So if they have custom code that's going to interact with the cache as well, you can allow that functionality. And also cache eviction and the least-recently-used algorithm: that's basically how it decides, when you're at capacity and your cache is full and you go to put something in there, what happens. The least recently used item is the one that's pushed out in order to make room for new cache values.

All right, so a couple of helpful links here. I definitely utilized the Apex Developer Guide a lot, and Trailhead was also super helpful; there's a lot of good material there to help get you started with the platform cache.

All right, well, thank you so much. Hopefully this was helpful for everyone and you got a good chance to get a good understanding of the platform cache and how it works.