Improve API Performance by Utilizing the Platform Cache

Salesforce Developers
16 Feb 2024 · 19:38

Summary

TL;DR: In this video, Phil Bergner, a Salesforce ecosystem veteran, discusses enhancing API performance through the use of platform cache. He shares his experience developing apps like Eureka, designed for field service teams needing offline data access. Bergner emphasizes the importance of optimizing data syncing for mobile devices with limited connectivity. He explains the Lightning Platform cache, its types (session and org), and how it can speed up data retrieval, reducing API calls and CPU time. He also covers cache management, best practices, and the impact of caching on performance, using benchmark data from his product to illustrate the significant time and resource savings achieved.

Takeaways

  • **Platform Cache Introduction**: The talk introduces the concept of utilizing the Salesforce platform cache to enhance API performance.
  • **Speaker Background**: Phil Bergner, with 11 years of experience in the Salesforce ecosystem, shares his insights from Professional Services to product development.
  • **Performance Benchmarking**: Phil discusses the importance of benchmarking and how platform cache can be leveraged to improve data retrieval times, especially for offline mobile usage.
  • **Data Syncing Model**: The 'briefcase model' is highlighted, emphasizing the need for field service technicians to have all necessary data primed before going offline.
  • **Mobile Device Considerations**: The talk addresses the challenges of mobile device connectivity speeds and memory limitations, underscoring the need for optimization.
  • **Understanding the Platform Cache**: The platform cache is explained as a layer that stores data for faster retrieval compared to traditional methods.
  • **Cache Types**: Different cache types are discussed, including session cache (user-specific) and org cache (applicable across users), along with their time-to-live (TTL) considerations.
  • **Cache Implementation Steps**: Steps for implementing cache include deciding on cache type, considering data freshness versus performance trade-offs, and managing cache limits.
  • **Cache Management and Diagnostics**: Salesforce provides tools for managing and diagnosing cache usage, including visualizations of cache data and partition allocations.
  • **Cache Refresh Strategy**: Strategies for refreshing cache data are discussed, including handling cache misses and the use of feature flags for toggling cache usage.
  • **Benchmark Results**: The talk shares benchmark results, showing significant reductions in sync time and CPU usage when using the platform cache.

Q & A

  • What is the main topic of the video?

    The main topic of the video is improving API performance by utilizing the Salesforce platform cache.

  • Who is the speaker in the video?

    The speaker in the video is Phil Bergner, who has been in the Salesforce ecosystem for about 11 years.

  • What is Phil Bergner's background in the Salesforce ecosystem?

    Phil Bergner initially worked in Professional Services doing Salesforce implementations and later pivoted to product development, working on AppExchange apps, including a healthcare-focused app and, most recently, Eureka.

  • What is Eureka and how does it relate to the platform cache?

    Eureka is a Salesforce-native mobile workforce solution for guided procedures, processes, and collecting data offline, primarily used by field service teams. It utilizes a briefcase model and the platform cache to optimize data syncing for offline use.

  • What are the benefits of using the platform cache as discussed in the video?

    The platform cache provides faster performance by caching data, reducing the time and memory required to compile complex data structures, and speeding up data retrieval from external systems.

  • What is the trade-off mentioned for using the platform cache?

    The trade-off for using the platform cache is that it may not necessarily contain live data, so there's a balance between performance and the need for the absolute latest data.

  • What are the different types of cache mentioned in the video and their uses?

    The video mentions session cache, which holds user-specific data, and org cache, which holds data applicable across many users. The choice depends on whether the data is specific to a user or broadly applicable.

  • What is TTL in the context of the platform cache?

    TTL stands for Time To Live and refers to the lifespan of an item in the cache, which for session cache is up to 8 hours or when the session expires, and for org cache can be configured up to a maximum of 48 hours.

  • What are some considerations when using the cache as outlined in the video?

    Considerations include ensuring data safety by not treating cache as permanent storage, optimizing usage by storing a few large items rather than many small ones, and being aware of the default limits for different cache types.

  • How can one manage and diagnose the platform cache usage in Salesforce?

    Salesforce provides visualizations and functionality for managing and diagnosing cache usage, including dashboards showing cache utilization, breakdowns by cache type, and the ability to purge individual records or refresh the entire cache.

  • What was the outcome of the benchmark scenario presented in the video?

    The benchmark scenario showed that utilizing the platform cache reduced full sync time by 5 seconds and decreased the average number of queries per request from 25 to 7, significantly improving performance.

Outlines

00:00

Introduction to Platform Cache for API Performance

The speaker, Phil Bergner, introduces the topic of enhancing API performance using the Salesforce platform cache. He shares his professional background, including 11 years of experience with Salesforce, moving from Professional Services to product development. Phil discusses his work on AppExchange apps, particularly Eureka, a mobile workforce solution for guided procedures and offline data collection. He emphasizes the importance of optimizing data sync for field service teams, who rely on a briefcase model (as in the Salesforce mobile apps) to prepare for fieldwork with potentially limited connectivity. The platform cache is presented as a solution to improve performance by caching data for faster retrieval, which is especially beneficial for complex, memory-intensive data compilation.

05:01

๐Ÿ› ๏ธ Understanding and Utilizing Platform Cache

Phil explains the concept of the Lightning Platform cache, which offers faster data retrieval compared to traditional methods. He outlines scenarios where caching is beneficial, such as compiling complex data or retrieving data from external systems. The trade-off between performance and data freshness is discussed, highlighting that cached data might not be live. The session and org caches are introduced, with session cache being user-specific and org cache applicable across multiple users. The time to live (TTL) for cache items is also explained, with session cache lasting up to 8 hours and org cache configurable up to 48 hours. Phil advises considering the implications of caching, such as data loss and optimization strategies, and being aware of the default limits for different cache types, especially the 3 megabytes of free capacity for AppExchange apps.

10:01

Salesforce Platform Cache Management and Diagnostics

The paragraph discusses the visualization and management tools available within Salesforce for monitoring platform cache usage. It mentions the ability to see overall cache utilization, breakdowns by cache type, and partition setup for different use cases. Phil also covers the functionality to purge individual records or refresh the entire cache, and the diagnostic data available, such as record-by-record cache contents, namespace, size, access count, and last access time. He emphasizes the importance of this data for optimizing cache usage and making code adjustments based on actual usage patterns.
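
The diagnostics described above can also be reached programmatically. A minimal Apex sketch, runnable from Anonymous Apex; the partition name and keys are hypothetical:

```apex
// Inspect and purge a platform cache partition programmatically.
// 'local.FieldService' and 'staleTemplate' are hypothetical names.
Cache.OrgPartition part = Cache.Org.getPartition('local.FieldService');

// List every key currently stored in this partition.
for (String key : part.getKeys()) {
    System.debug('cached key: ' + key);
}

// Purge a single record, mirroring the "purge individual records" UI action.
if (part.contains('staleTemplate')) {
    part.remove('staleTemplate');
}
```

This complements the Setup dashboards: the same per-record visibility the UI shows can feed a custom dashboard or a scheduled cleanup job.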

15:03

Benchmarking Platform Cache Performance

Phil presents a benchmark scenario where he tested the performance of the platform cache with his product code. The test involved syncing a large amount of data for offline use by a field service technician. He details the process, from receiving requests to sending responses, and the CPU time usage. The results showed a significant reduction in sync time and CPU usage when using the platform cache. Phil also discusses the code implementation, including feature flags for cache utilization and lifespan, handling cache misses, and exceptions. He concludes with a simpler example using Salesforce's Cache Builder interface for beginners and additional considerations for cache management.
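
The simpler starting point Phil mentions is Apex's `Cache.CacheBuilder` interface, where the platform calls `doLoad()` on a cache miss, so the fallback query and the cache write live in one place. A hedged sketch; the object and field names are hypothetical:

```apex
// Sketch of the Cache.CacheBuilder pattern for beginners: doLoad() runs only
// when the requested key is not already in the cache.
// Template__c and Body__c are hypothetical object/field names.
public class TemplateCache implements Cache.CacheBuilder {
    public Object doLoad(String templateId) {
        Template__c t = [SELECT Body__c FROM Template__c WHERE Id = :templateId];
        return t.Body__c; // the returned value is cached automatically
    }
}
```

Retrieval then becomes a single call, for example `String body = (String) Cache.Org.get(TemplateCache.class, templateId);`, which returns the cached value or loads and caches it on a miss.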

Helpful Resources and Conclusion

In the final paragraph, Phil provides helpful resources for learning more about the platform cache, including the Apex developers guide and Trailhead modules. He summarizes the benefits of using the platform cache, such as reduced full sync time and decreased CPU usage per request. Phil also mentions the importance of considering data refresh strategies, handling cache misses, and understanding cache visibility and eviction policies. The conclusion wraps up the discussion on the platform cache and its impact on API performance.

Mindmap

Keywords

API Performance

API Performance refers to the efficiency and speed at which an application programming interface (API) processes and returns data. In the context of the video, improving API performance is crucial for the Salesforce ecosystem, particularly for mobile applications that rely on data syncing for field service operations. The video discusses how utilizing the platform cache can enhance API performance by reducing data retrieval times and CPU usage, which is essential for mobile apps operating in offline or low-connectivity scenarios.

Platform Cache

The Platform Cache in Salesforce is a feature that stores data temporarily to improve data retrieval speed. The video emphasizes the importance of the platform cache for optimizing mobile application performance, especially for apps like Eureka that deal with large volumes of data. The cache can store data that is frequently accessed, reducing the need to repeatedly query the database and thus speeding up response times for end-users.

Field Service Teams

Field Service Teams are groups of professionals who perform services at customer locations, often requiring access to real-time data for their tasks. The video discusses how the platform cache can be particularly beneficial for these teams, as they often work in environments with limited or intermittent connectivity. The cache ensures that essential data is available offline, which is critical for their operations.

Briefcase Model

The Briefcase Model in Salesforce is a data synchronization mechanism used by mobile applications to prepare data for offline use. As explained in the video, field service technicians use this model to 'prime' their mobile devices with necessary data before going offline. The platform cache can complement this model by storing frequently accessed data, reducing the time and memory required to compile this data when offline access is needed.

Time to Live (TTL)

Time to Live (TTL) in the context of the platform cache refers to the duration for which data is stored in the cache before it is considered stale and is removed or refreshed. The video highlights the importance of TTL in managing cache effectiveness, as it balances performance gains from caching against the need for up-to-date data. The speaker discusses configuring TTL for the session cache and the org cache to suit different use cases.
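
A minimal Apex sketch of setting a TTL explicitly; the partition name is hypothetical, and 48 hours (172,800 seconds) is the documented org cache maximum:

```apex
// Write to the org cache with an explicit TTL, in seconds.
String formList = JSON.serialize(new List<String>{ 'SafetyInspection', 'HomeHealth' });
Integer ttlSecs = 24 * 60 * 60; // expire this entry after 24 hours
Cache.Org.put('local.FieldService.formList', formList, ttlSecs);
```

Session cache entries normally last until the session ends (up to 8 hours), but `Cache.Session.put` also accepts a TTL argument to expire them sooner.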

Session Cache

Session Cache in Salesforce is a type of cache that stores user-specific data, valid only for the duration of a user's session. The video explains that session cache is suitable for data that is unique to an individual user, such as their timezone or preferred currency. It is one of the caching strategies discussed for improving API performance, particularly for data that does not need to be accessed by multiple users.

Org Cache

The org cache, as mentioned in the video, is another type of cache in Salesforce that stores data applicable across many users. Unlike the session cache, the org cache is ideal for data that does not change frequently and can be accessed by multiple users, such as forms and templates used by field service teams. The video uses the example of home health assessments to illustrate the use of the org cache for data that is consistent and required by various users.
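
The session/org split can be sketched in a few lines of Apex. This is illustrative only, not the speaker's code; the partition name is hypothetical:

```apex
// Session cache for per-user values; org cache for values shared across users.
// 'local.FieldService' is a hypothetical partition name.
public class CacheTypeExample {
    private static final String PARTITION = 'local.FieldService';

    // Per-user value: lives in the session cache (up to 8 hours / session end).
    public static void cacheUserTimezone(String tz) {
        Cache.Session.put(PARTITION + '.userTz', tz);
    }

    // Shared value: lives in the org cache, visible to all users.
    public static void cacheFormTemplate(Id templateId, String templateJson) {
        Cache.Org.put(PARTITION + '.tpl' + templateId, templateJson);
    }
}
```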

Cache Misses

A cache miss occurs when the requested data is not found in the cache and must be retrieved from the primary data source, such as a database. The video discusses the importance of planning for cache misses in application design, ensuring that there is a fallback mechanism to retrieve data from the original source if it is not available in the cache. This is crucial for maintaining application functionality and performance even when the cache does not have the required data.
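
The fallback mechanism described in the talk, try the cache first, collect the misses, query only for what was missing, then warm the cache, can be sketched as follows. Partition, object, and field names are hypothetical:

```apex
// Cache-aside pattern with a SOQL fallback for misses.
// Template__c / Body__c and 'local.FieldService' are hypothetical names.
public class TemplateLoader {
    public static Map<Id, String> loadTemplates(Set<Id> templateIds) {
        Map<Id, String> results = new Map<Id, String>();
        Set<Id> misses = new Set<Id>();
        Cache.OrgPartition part = Cache.Org.getPartition('local.FieldService');

        for (Id templateId : templateIds) {
            Object cached = part.get('tpl' + templateId);
            if (cached != null) {
                results.put(templateId, (String) cached);  // cache hit
            } else {
                misses.add(templateId);                    // fall back to SOQL
            }
        }
        for (Template__c t : [SELECT Id, Body__c FROM Template__c WHERE Id IN :misses]) {
            results.put(t.Id, t.Body__c);
            part.put('tpl' + t.Id, t.Body__c);             // warm the cache for next time
        }
        return results;
    }
}
```

Because the SOQL path always exists, the code keeps working even if the cache is empty, evicted, or disabled.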

Feature Flags

Feature Flags are used in software development to toggle features on or off without deploying new code. In the video, the speaker mentions the use of feature flags to control the use of the platform cache, allowing customers to enable or disable caching based on their specific needs or network conditions. This provides flexibility and control over how the cache is used within the Salesforce ecosystem.
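
One way to wire such a flag, sketched with a hierarchy custom setting acting as a cache kill switch, as the talk describes. The setting and field names are hypothetical:

```apex
// Cache_Settings__c / Use_Platform_Cache__c are hypothetical custom setting names.
Cache_Settings__c cfg = Cache_Settings__c.getOrgDefaults();
Boolean useCache = cfg != null && cfg.Use_Platform_Cache__c == true;

String data;
if (useCache) {
    data = (String) Cache.Org.get('local.FieldService.formList'); // may be null on a miss
}
if (data == null) {
    // Fallback path: runs when caching is disabled or the entry was missing.
    data = 'query live data here';
}
```

Unchecking the flag routes every request down the live-query path without a deployment, which is what makes it useful as a kill switch in production.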

Cache Eviction

Cache Eviction is the process of removing data from the cache to make room for new data when the cache reaches its capacity. The video touches on the least recently used (LRU) algorithm used by Salesforce for cache eviction, which removes the least recently accessed items first. Understanding cache eviction is important for managing the cache's effectiveness and ensuring that the most relevant data is kept available for end-users.

Highlights

Introduction to improving API performance using platform cache.

Speaker Phil Bergner's background in Salesforce ecosystem and product development.

Discussion on the importance of offline data for field service teams.

Challenges with memory-intensive processes in mobile devices.

Definition and benefits of the Lightning Platform cache for faster data retrieval.

Use case scenarios for platform cache including complex and frequently used data.

Considerations for using cache such as data freshness and performance trade-offs.

Steps to decide between session cache and org cache based on use case.

Explanation of Time to Live (TTL) for cache items.

Development best practices including safety and data loss prevention.

Optimizing cache usage by storing fewer large items rather than many small ones.

Details on cache limits and capacities for different org types.

Visualization and management of cache usage within Salesforce.

Ability to purge individual records or refresh the entire cache.

Benchmark scenario demonstrating the impact of platform cache on performance.

Code example and explanation of how to implement platform cache in Apex.

Use of feature flags for toggling cache usage and customizing cache lifespan.

Handling cache misses and fallback to traditional data retrieval methods.

Results of benchmark tests showing significant performance improvements with cache.

Conclusion on the practical applications and benefits of platform cache for end users.

Introduction to Salesforce's Cache Builder interface for easier cache implementation.

Additional considerations for cache management and data handling.

Helpful resources for learning more about platform cache.

Transcripts

play00:01

[Music]

play00:05

all right so today we're going to be

play00:06

talking about improving API performance

play00:09

by utilizing the platform cache it's

play00:11

definitely a mouthful but I think it's a

play00:13

really powerful feature and we're going

play00:15

to dive into things so buckle

play00:18

up all right let me tell you a little

play00:20

bit about myself my name is Phil bergner

play00:23

I've been in the Salesforce ecosystem

play00:25

for about 11 years now the first few

play00:27

years I worked on the Professional

play00:29

Services doing Salesforce

play00:31

implementations after that I pivoted

play00:34

to uh the product development side where

play00:37

I've been working on some app exchange

play00:38

apps uh Healthcare focused one and most

play00:41

recently

play00:42

Eureka so I want to tell you a little

play00:44

bit about what we do because this is

play00:46

what led me to utilize a platform cache

play00:48

we're going and looking at some

play00:49

Benchmark data and some kind of

play00:52

truncated code that that we utilize for

play00:54

the platform cast so I think this is

play00:56

important background um where a

play00:58

Salesforce native mobile work for

play01:00

solution for guided procedures processes

play01:02

and collect data offline so we work a

play01:04

lot with field service teams and I want

play01:06

to focus on that even offline part

play01:08

because that's the most critical part

play01:11

because just like Salesforce field

play01:12

service mobile app Salesforce mobile app

play01:15

we utilize a briefcase model so the idea

play01:18

there is that if I'm a field service

play01:20

tech and I'm getting ready to go out in

play01:21

the field I'm going to need all my data

play01:24

prime before I go right so I may have no

play01:26

connectivity intermittent connectivity

play01:29

not really so I need to have all that

play01:31

data available just in case I need it so

play01:34

they'll be syncing data like work orders

play01:37

accounts work order line items service

play01:39

appointments product cataloges so we

play01:41

have customers that are sinking over

play01:43

100,000 records just for a single day

play01:46

just because they're not sure exactly

play01:48

what they're going to need and as you

play01:49

can imagine that's a pretty memory

play01:51

intensive time intensive process and

play01:55

then you're talking about a mobile

play01:56

device connectivity speeds um we really

play02:00

are trying to optimize it as much as

play02:02

possible and that's what led us to

play02:03

explore the platform cache trying to eek

play02:05

out every little bit of performance we

play02:08

can so what is the lightning platform

play02:11

cache it provides faster performance it

play02:13

is a cach layer really the idea is there

play02:16

you can put this data in the cache and

play02:17

then when you retrieve it it's going to

play02:18

be faster than if you were to do it

play02:20

through traditional methods so um for

play02:23

our scenario that first bullet there

play02:25

complex data that's time and memory

play02:27

intensive to compile so for our scenario

play02:29

we running about 25 socle queries and

play02:33

compiling a very large Json structure

play02:36

that was going to be returned to the

play02:37

nend user um so some other scenarios

play02:41

could be data retrieve from an external

play02:42

system so maybe you have an external

play02:44

Billing System so they're reaching your

play02:46

API you have to call it to another API

play02:48

and get that data there's a lot of

play02:49

latency involved there if you can get

play02:51

retrieve that data and put it in the

play02:53

cash it's going to be able to return it

play02:55

a lot faster and really just identifying

play02:57

any frequently used scenarios where

play03:01

you're running the same process over and

play03:03

over it's kind of memory intensive like

play03:05

I've been saying um that's all right for

play03:07

utilizing the platform cache I think the

play03:09

important consideration there is when

play03:11

it's cach it's not necessarily live data

play03:13

right so that's an important trade off

play03:16

is performance versus how often is that

play03:18

data changing how important is it that

play03:20

it's the absolute latest

play03:22

data all right so kind of just you've

play03:26

gone through your process figured out oh

play03:27

you know I found an area where I might

play03:28

want to utilize the platform form cache

play03:31

what are my next steps so the first

play03:33

thing you want to decide is which type

play03:34

of cach makes sense for your use case it

play03:36

could either be the session cache the

play03:38

order cache or some combination of both

play03:41

uh the session cache is really user

play03:43

specific data it's only going to be

play03:44

served for a specific user so think like

play03:47

user record information like time zone

play03:50

preferred currency stuff like that and

play03:53

the order cache is really data that's

play03:55

going to be applicable across many users

play03:57

so for our use case that's going to be

play03:59

they're out in the field they're doing

play04:01

things like home health uh Home Health

play04:04

assessments safety inspection forms like

play04:07

those forms and that type of data is

play04:09

going to be accessible for a broad set

play04:11

of users right it's not for a specific

play04:13

user they may be on a different site or

play04:16

capturing different data but the form

play04:18

itself isn't changing so that was a

play04:20

perfect example for us to utilize the

play04:21

orache um one other thing I want to

play04:23

point out here is that the time to live

play04:26

it's often abbreviated like TTL and some

play04:28

of the documentation that's really just

play04:30

the lifespan of that item in the cash so

play04:33

for the session cach that's about up to

play04:35

8 hours or whenever the session expires

play04:37

and for the orach you can configure that

play04:39

as well but it can be up to a maximum of

play04:42

48

play04:43

Hours what are some considerations if

play04:46

for using the cache one is be safe I

play04:49

think that's a pretty good development

play04:51

practice in general maybe there's some

play04:52

like Cowboys who don't believe in that

play04:54

and just go straight to Pride but um

play04:57

what I'm talking about specifically here

play04:58

are there's no guarantee against data

play05:01

loss from the cach right it's not a

play05:02

permanent storage solution so don't put

play05:04

data in the cach like oh it's good to go

play05:06

it's going to be there it's serve up

play05:07

quicker but like we just said caches

play05:09

expire that data isn't permanent you

play05:11

need a persistent storage solution for

play05:12

where you're storing your data um

play05:15

optimize usage this is a tip from some

play05:17

of the Salesforce documentation but

play05:18

basically saying it's better to store a

play05:21

few small items than putting a whole lot

play05:23

of small sorry it's better to store um

play05:27

just few large items than a whole bunch

play05:28

of small items

play05:30

and then the other call out I want to

play05:31

make here is some of the default limits

play05:33

that are available for different or

play05:34

types um the most interesting one for me

play05:37

was the app exchange apps and getting 3

play05:39

megabytes of Provider free capacity for

play05:42

us that unlocked a lot of functionality

play05:45

because we're an app exchange app and

play05:46

going into a customer's or you're not

play05:48

sure what org type they're going to have

play05:50

how much dat how much of the cash could

play05:52

be utilized by other app exchange apps

play05:54

or custom code in their org so being

play05:56

able to be installed in a subscriber org

play05:58

and getting three me gab of capacity we

play06:01

know that we have at least that much um

play06:03

and we can kind of control that

play06:04

ourselves and if they want to augment

play06:05

that with more even

play06:08

better so I want to walk through some of

play06:10

the visualizations and functionality

play06:12

that's available inside Salesforce for

play06:14

managing and uh exp diagnostic data for

play06:17

utilizing the platform cache you see

play06:20

here this is an example of a cache

play06:22

that's being utilized you can see here

play06:24

there's some of this dashboard

play06:25

information for how much of the cach is

play06:27

being utilized what's the break Down For

play06:30

What typee of uh cash you're using here

play06:32

it's all dedicated to that app exchange

play06:34

free capacity and then partition

play06:36

allocations you can set up different

play06:38

partitions for the cach for different

play06:39

use cases right so if you have 10

play06:41

megabytes you could dedicate two to a

play06:43

specific use case and a specific

play06:45

partition dedicate eight to a different

play06:46

use case or however you want to break it

play06:49

down so if you were to click into a

play06:51

specific partition this is what you

play06:52

would see in this scenario we only have

play06:54

one partition so it's utilizing all of

play06:56

it and then this what again is our

play06:58

metric for how much of the orcat we're

play06:59

currently

play07:01

using this one is very a little crazy

play07:03

looking I know but I just want to show

play07:05

that this visualization is available

play07:07

basically a pie chart breakdown of how

play07:09

your cache is being utilized I think the

play07:11

next screen is going to be a little bit

play07:13

easier um in the UI once you scroll down

play07:15

this is what you're going to see this is

play07:17

really a breakdown of what's currently

play07:19

in your cache like on a record by record

play07:21

basis so uh what the way you work with

play07:24

the cach you basically put a key it's a

play07:26

key value pair like an apex map however

play07:28

you want to think of it

play07:29

you put a record in there you're going

play07:31

to access it by that key and here you

play07:34

can see the specific records that have

play07:35

been added to this cache the name space

play07:37

that they exist in whether it's the

play07:39

local name space or a package name space

play07:42

uh the size so this is the size as it

play07:44

exists in the cache so there is a

play07:46

compression algorithm going on I don't

play07:48

know exactly what it is but it does seem

play07:50

to chunk stuff down to like pretty small

play07:52

sizes so it's probably be significantly

play07:54

smaller than you might expect and then

play07:56

there's some other metrics here too so

play07:58

the last time it was access the access

play08:00

count you know how many times it's been

play08:02

touched so this is all really useful

play08:04

data to figure out how your cach is

play08:06

being utilized you can make changes to

play08:08

your code based on you know oh I thought

play08:09

people were going to be hitting this all

play08:10

the time they're really not you can make

play08:12

tweaks and adjustments there you can

play08:14

also access a good amount of this type

play08:16

of data um some metrics like

play08:18

programmatically as well through like

play08:20

Anonymous Apex or if you were to set up

play08:22

a custom dashboard you can do that as

play08:23

well um the other thing I want to point

play08:25

out here is that you also have the

play08:27

ability to delete individual like Purge

play08:30

individual records from the cache here

play08:32

on the the side column there you also

play08:35

have the ability on the main screen to

play08:36

just refresh the entire cach if you just

play08:38

like hey I know something significant is

play08:39

changed I don't want anyone getting

play08:40

stale data you can come in there and

play08:42

manually push all that data

play08:44

out all right so let's talk a little bit

play08:47

about this Benchmark scenario that I ran

play08:50

so this was with our product code and

play08:53

this is really just an intense use case

play08:55

where I'm a SE I'm a safety Tech I'm

play08:58

getting ready to go out in the field I

play08:59

have to sync 350 templates 70 forms and

play09:03

I need all this data available offline

play09:05

keeping in mind that some of these

play09:06

templates can be over a Meg of Json so

play09:09

it really is I wanted to kind of push it

play09:11

to the limits to see get some really

play09:13

good data for how performance was going

play09:15

to change um and then one thing that my

play09:19

my preference for like this type of data

play09:20

is I really want it to be production

play09:22

code not just like demo code that's kind

play09:24

of contrived or isn't a really good real

play09:26

world test so I ran this using our uh

play09:29

actual product code and I'm going to

play09:31

walk through kind of a strip down

play09:32

version of what that looks like in just

play09:34

a few minutes

play09:35

here all right so let me hop into that

play09:38

real

play09:39

quick so if you don't know a lot of Apex

play09:42

that's okay I'm going to walk through

play09:43

this a little bit and then a few minutes

play09:44

later I'm going to show you a better

play09:45

kind of starting point but I wanted to

play09:48

work walk through a really kind of

play09:49

stripped down example of the code that

play09:51

we're running in order to capture some

play09:52

of this Benchmark data so you're going

play09:55

to have in our scenario a REST endpoint set up. It's been calling this helper class, which queries some data. In our case it's templates, but it could easily be work orders, accounts, whatever type of data you have living in the cache.

One best practice we've learned over the years is to utilize a lot of feature flags, because you may go into a customer's org thinking you know exactly how they're going to use the feature, and you're completely wrong: they want to use it completely differently, or they want to tweak things based on their specific use case. So we utilize feature flags. One of them here is really just a custom setting that defines whether they want to utilize the cache at all. We're making this feature available, but a customer may say, "I don't want any chance of stale data being served to my users," or "they're all on good connections," or "we're not syncing much data, we don't want to use it at all." It's really just a checkbox they can come in and turn on or off. Even if you're not working with a package, this can still be really useful as a kill switch: if you roll something out to production and things go a little awry, you can just log in and uncheck it, and your code is written to skip over the cache entirely. That's what we're checking here with this boolean: is that setting enabled, should we be utilizing the cache?

The other customization we give users is the ability to define the lifespan for items in the cache. Once again, this can be a customer-by-customer configuration: maybe some are only updating their templates every couple of days, maybe some are changing them all the time. They can determine that lifespan, and we're going to respect it.
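A minimal sketch of that flag-and-TTL check, assuming a hypothetical hierarchy custom setting `Cache_Settings__c` with `Enable_Cache__c` and `Cache_TTL_Seconds__c` fields (these names are illustrative, not the product's actual code):

```apex
// Hypothetical custom setting gating cache usage and lifespan.
public with sharing class CacheConfig {

    // Kill switch: should we touch the platform cache at all?
    public static Boolean useCache() {
        Cache_Settings__c s = Cache_Settings__c.getOrgDefaults();
        return s != null && s.Enable_Cache__c == true;
    }

    // Customer-defined lifespan for cached items, with a default.
    // Org cache TTL must be between 300 and 172,800 seconds (48 hours).
    public static Integer ttlSeconds() {
        Cache_Settings__c s = Cache_Settings__c.getOrgDefaults();
        Decimal ttl = (s == null) ? null : s.Cache_TTL_Seconds__c;
        return (ttl == null) ? 86400 : ttl.intValue();
    }
}
```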

This is really just a map of the structure we're going to return to the end user, and here we keep track of the templates we don't find in the cache. I think this is a really important part of the design pattern: there can be cache misses, there can be data you expect to be in the cache that isn't, so you should have a fallback plan set up so that if it's not in the cache you're still serving the data up.

So we come in here having determined whether we should use the cache. If we should, we go ahead and access our partition; that's what we're doing with Cache.Org.getPartition, based on our namespace. Then we iterate through all the IDs we want (once again, this could be templates, accounts, whatever), attempt to access each item, and determine whether we found it. In this scenario we're just checking whether it's not null. If it's not null, we found something in the cache, so we can go ahead; here I'm just deserializing some JSON, casting it into the structure I want, and putting it in the map to return to the end user. The gist of it is: we found it, we put it in our structure, and that one's good to go back to the API recipient. If it's not found, we keep track of those too.
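That read-through loop could look roughly like this; the `local.TemplateCache` partition name, the `TemplateDTO` wrapper, and the `requestedIds` variable are assumptions for illustration, not the product's actual code:

```apex
// Sketch of the cache-read loop: collect hits, track misses.
Map<Id, TemplateDTO> results = new Map<Id, TemplateDTO>();
Set<Id> cacheMisses = new Set<Id>();

Cache.OrgPartition part = Cache.Org.getPartition('local.TemplateCache');

for (Id templateId : requestedIds) {
    // Cache keys must be alphanumeric; record IDs qualify as-is.
    Object cached = part.get(String.valueOf(templateId));
    if (cached != null) {
        // Hit: deserialize the stored JSON back into our return structure.
        results.put(templateId,
            (TemplateDTO) JSON.deserialize((String) cached, TemplateDTO.class));
    } else {
        // Miss: remember it so the fallback query can pick it up.
        cacheMisses.add(templateId);
    }
}
```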

And of course, if we're not utilizing the cache, then we didn't find anything in it. Either way, for whatever we didn't find, we basically do what your code would normally do anyway: a helper method goes and does all the grunt work, runs all your queries, compiles the structures, and puts them together. That's what we're abstracting away there, and we put that data into what we return.

Then, since we've already done all the grunt work for data that wasn't previously in the cache, if the cache is enabled, let's put it in there. That way, if this person syncs again, or the next person comes online needing to sync that same data, it's going to be in the cache and good to go. That's what we're doing with this statement. It works just like a map: you do a put. In this scenario I'm serializing the data, and the last parameter is the lifespan, how long we want this to persist in the cache. One other thing to be aware of

is that there are a couple of specific exceptions you may want to handle. You can decide how best to handle them for your use case, but wrap the cache calls in a try/catch for things like ItemSizeLimitExceededException; there's a whole list of them in the Apex documentation.

And that's it. The data was either in the cache or it wasn't; if it wasn't, we put it there, and we return it to the end user.
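A hedged sketch of that write path, using `Cache.OrgPartition.put` and `Cache.ItemSizeLimitExceededException` from the Cache namespace; the partition name and the `results`/`cacheMisses` variables are again hypothetical:

```apex
// Write-back after a cache miss: serialize, put with a TTL, and
// tolerate cache failures rather than failing the whole request.
Cache.OrgPartition part = Cache.Org.getPartition('local.TemplateCache');
Integer ttlSecs = 86400; // e.g. read from a customer-facing setting

for (Id templateId : cacheMisses) {
    try {
        // The last parameter is the time-to-live in seconds.
        part.put(String.valueOf(templateId),
                 JSON.serialize(results.get(templateId)),
                 ttlSecs);
    } catch (Cache.ItemSizeLimitExceededException e) {
        // Item too big for the cache: skip it. The fallback query
        // already produced the data, so the request still succeeds.
        System.debug(LoggingLevel.WARN, 'Cache put skipped: ' + e.getMessage());
    }
}
```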

All right, I'll show a visualization at the end, but I at least want to show some of the raw data here. In this scenario I ran 12 requests for some of this data. The leftmost column, the start column, is the time the endpoint received the request; I'm trying to exclude any mobile latency, so this is when the request hit the server. The next column is when the response was sent back to the mobile app, and the elapsed column is the CPU time, how long that took on the server side. Our mobile app does a little bit of smart batching based on expected payload sizes, which is why I think the first six kick off at the same time. We have some summary data down at the bottom (again, visualization in a second): from start to end it was 17 seconds, from when we received the first request to when the last response was sent, with an average CPU time of 9 seconds.

What about with the platform cache? I ran the exact same tests with similar data, but the start-to-end time was 12 seconds and the average CPU time was 6. Let me show you a quick visualization of that: you can see that utilizing the cache was always at least as fast, and in some cases we saved a significant amount of time.

Jumping to the conclusion: from the end user's perspective, the full sync time was reduced by 5 seconds, from 17 to 12. Like I said, we're moving a lot of data back and forth across the mobile connection, and you can imagine, if you're a tech at your office or your house, ready to go out for the day, you push that button and you're waiting for the green light that says you have all your data. Every second counts. So it was a pretty significant improvement for not very complex code: I didn't go back and completely rewrite anything. I'm just trying to read from the cache, putting data there if it's missing, and otherwise letting my code run the way it normally would.

The other thing I want to point out is that per request we went from 25 SOQL queries down to 7, and the only reason we still have that many is that some data is always retrieved in real time. Think patient medication data: you never want to go out with stale data there, so some data we would never serve from the cache. The average CPU time was reduced by 3 seconds as well, so if you're familiar with platform limits on the number of SOQL queries and on CPU usage, you're also buying yourself a little more breathing room, depending on how close you are to hitting those limits

already.

Like I said earlier, I want to show a simpler example if you're just getting started and want to try something out. Salesforce has the Cache.CacheBuilder interface available, and that's what this is an example of. It's pretty straightforward, and really similar to the example in the documentation; I just kept with the theme of retrieving template data to show how it works slightly differently. Down at the bottom is the usage: this is how you retrieve data. Using the Cache class, you retrieve it, telling it what type of data it is. Once again, this could be anything: accounts, work orders, whatever type of data you have in there. The way it works is that when you call get to retrieve the record, the implementation above determines whether it's in the cache. If it is, it serves that up; if it's not, it runs the doLoad method, where you've told it what to do to retrieve the data, and it automatically puts the result in the cache for you. So, depending on how much fine-grained control you want over when you're serving from the cache and what data goes into it and when, this is a really great starting point. It abstracts that away and says: let me handle whether it's in the cache or not; you just tell me what to do when it's not, and I'll handle where it comes from.
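The Cache.CacheBuilder pattern he describes looks roughly like this, modeled on the example in the Apex documentation; `Template__c` and the key passed at the call site are assumptions:

```apex
// CacheBuilder implementation: doLoad runs only on a cache miss,
// and the platform stores its return value automatically.
public class TemplateCache implements Cache.CacheBuilder {
    public Object doLoad(String templateId) {
        // The "grunt work" that only happens when the key isn't cached.
        Template__c t = [SELECT Id, Name FROM Template__c WHERE Id = :templateId];
        return t;
    }
}
```

Usage, where hit-or-miss handling is done for you:

```apex
Template__c t = (Template__c) Cache.Org.get(TemplateCache.class, someTemplateId);
```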

What are some additional considerations? Some of these we already hit on a little. Consider, when refreshing data, what type of data it is and how stale it can be; consider using a time-to-live for the cache, or even dynamic refresh options. Like I said, you can go in programmatically in Apex when something changes and, from a trigger, purge that record from the cache; there's a lot of functionality available for handling that dynamically. The other one is handling cache misses: if the data isn't in the cache, make sure your code is set up to handle that
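The trigger-based purge he mentions might look like this; `Template__c` and the partition name are hypothetical:

```apex
// Evict cache entries when the underlying records change, so the
// next sync rebuilds them instead of serving stale data.
trigger TemplateCacheEviction on Template__c (after update, after delete) {
    Cache.OrgPartition part = Cache.Org.getPartition('local.TemplateCache');
    for (Template__c t : Trigger.isDelete ? Trigger.old : Trigger.new) {
        part.remove(String.valueOf(t.Id)); // no-op if the key isn't cached
    }
}
```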

scenario. A couple of other ones I mentioned earlier: the cached data is compressed, so even though the allocation may not seem like a lot of space, it may hold more than you're expecting. Cache visibility matters if you're in a managed package: you can set whether the cache can be accessed by code outside your package, so if a customer has custom code that will interact with the cache as well, you can allow that. And there's cache eviction, a least-recently-used algorithm: it determines what happens when you're at capacity, your cache is full, and you go to put something in. Basically, the least recently used item is pushed out to make room for new cache

values.

All right, a couple of helpful links here. I definitely utilized the Apex Developer Guide a lot, and Trailhead was also super helpful; there's a lot of good material there to help you get started with the platform cache.

All right, well, thank you so much. Hopefully this was helpful for everyone and you got a good understanding of the platform cache and how it works.
