HowTo: Using Macrometa as a Global Edge Cache
Nearly all developers are familiar with caching. From the CPU to the browser to any web app, virtually all software relies on caching to some extent to provide blazing-fast responses, reduce network costs, offload work from the cloud, and improve availability during network partitions. The scenarios where a cache helps are numerous: speeding up databases, absorbing traffic spikes in web and mobile apps, session stores, token caching, player profiles and leaderboards in gaming, web page caching, and so on.
Naturally, a vast number of caching solutions are available in the market to address these scenarios. But that does not mean any one technology solves your problem. For example, there are many cache data access strategies: read-through (lazy-loading) caches, cache-aside, write-through caches, write-behind caches, refresh-ahead caching, and more. Each of these strategies makes perfect sense in some scenarios but not so much in others.
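As a quick sketch of how two of these strategies differ, here are cache-aside and write-through modeled with plain python dicts standing in for the cache and the origin database. All names here are illustrative, not Macrometa APIs:

```python
# Illustrative sketch only: plain dicts stand in for the cache layer
# and the origin database.

origin_db = {"user:1": {"name": "Ada"}}   # hypothetical origin store
cache = {}                                # hypothetical cache layer

def read_cache_aside(key):
    """Cache-aside: the app checks the cache first and, on a miss,
    loads from the origin and populates the cache itself."""
    if key in cache:
        return cache[key]          # cache hit
    value = origin_db[key]         # cache miss -> go to the origin
    cache[key] = value             # populate the cache for next time
    return value

def write_through(key, value):
    """Write-through: every write goes to the origin and the cache
    synchronously, so the cache is never stale after a write."""
    origin_db[key] = value
    cache[key] = value

assert read_cache_aside("user:1") == {"name": "Ada"}   # miss, then populated
assert "user:1" in cache                               # now cached
write_through("user:1", {"name": "Grace"})
assert read_cache_aside("user:1") == {"name": "Grace"} # served from cache
```

Which strategy fits depends on your read/write mix: cache-aside keeps the cache small but risks stale reads after external writes, while write-through keeps the cache fresh at the cost of write latency.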
When it comes to internet-scale web applications, five characteristics become critical in any caching solution: high performance, manageability, scalability, availability, and affordability.
Macrometa Global Data Network (GDN) provides the above characteristics and lets you reuse the same platform both as a geo-distributed edge cache and as a database. An alternative is to use cache solutions like Memcached or Redis, but then you are left doing significant heavy lifting to get characteristics like geo-distributed clustering and edge caching, or persistence to support hot datasets larger than memory.
Below is a quick code sample showing how you can build a geo-distributed edge cache using Macrometa GDN for use in your applications. The sample is in python, but you can do the same in any language using our other language drivers or via the REST API.
Let’s define our cache class as below. It lets you leverage Macrometa GDN as a geo-distributed edge cache.
```python
class Cache:
    def __init__(self, fabric):
        self.fabric = fabric
        # Reuse the 'cache' collection if it already exists,
        # otherwise create it.
        if fabric.has_collection('cache'):
            self.cache = fabric.collection('cache')
        else:
            self.cache = fabric.create_collection('cache')

    def get(self, key):
        # Returns the cached document, or None on a cache miss.
        return self.cache.get(key)

    def set(self, key, document, ttl=0):
        # Upsert: replace the document if the key exists, insert otherwise.
        # (ttl is accepted for API compatibility but not applied here.)
        document['_key'] = key
        if self.cache.has(key):
            self.cache.replace(document)
        else:
            self.cache.insert(document)
        return True

    def purge(self, keys):
        for key in keys:
            self.cache.delete(key)
        return True
```
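To exercise the class without a live GDN connection, here is a sketch that pairs it with minimal in-memory stand-ins for the fabric and collection objects. `FakeFabric` and `FakeCollection` are invented for illustration and implement only the calls the Cache class makes; in a real deployment you would instead pass the fabric object obtained from the Macrometa python driver:

```python
# Hypothetical in-memory stand-ins, implementing only the calls the
# Cache class relies on: has_collection, collection, create_collection,
# get, has, insert, replace, and delete.

class FakeCollection:
    def __init__(self):
        self.docs = {}

    def get(self, key):
        return self.docs.get(key)            # None on a miss

    def has(self, key):
        return key in self.docs

    def insert(self, document):
        self.docs[document['_key']] = document

    def replace(self, document):
        self.docs[document['_key']] = document

    def delete(self, key):
        self.docs.pop(key, None)

class FakeFabric:
    def __init__(self):
        self.collections = {}

    def has_collection(self, name):
        return name in self.collections

    def collection(self, name):
        return self.collections[name]

    def create_collection(self, name):
        return self.collections.setdefault(name, FakeCollection())

# Cache class as defined above, repeated so this sketch runs standalone.
class Cache:
    def __init__(self, fabric):
        self.fabric = fabric
        if fabric.has_collection('cache'):
            self.cache = fabric.collection('cache')
        else:
            self.cache = fabric.create_collection('cache')

    def get(self, key):
        return self.cache.get(key)

    def set(self, key, document, ttl=0):
        document['_key'] = key
        if self.cache.has(key):
            self.cache.replace(document)
        else:
            self.cache.insert(document)
        return True

    def purge(self, keys):
        for key in keys:
            self.cache.delete(key)
        return True

cache = Cache(FakeFabric())
assert cache.get('user:1') is None                  # nothing cached yet
cache.set('user:1', {'name': 'Ada'})                # insert path
assert cache.get('user:1')['name'] == 'Ada'
cache.set('user:1', {'name': 'Grace'})              # replace path
assert cache.get('user:1')['name'] == 'Grace'
cache.purge(['user:1'])
assert cache.get('user:1') is None                  # purged
```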
The rest of the sample program shows how to use the Cache class above. The sample does the following (the steps correspond to the step numbers you see in the logs of the interactive sample provided below):
- On the first access to a piece of data, there is a cache miss, i.e., the data is not in the local GDN edge cache. The app fetches the data from the origin database and also populates the GDN edge cache in its local region.
- On the second access, the app is served the data directly from the closest GDN edge cache.
- Now say another instance of the application accesses the same data from a different region, for example Europe. The data is served directly from the GDN edge cache closest to that application instance, because any data an application puts into the GDN is transparently and automatically geo-replicated across the globe.
- For writes from the application in any region, the app updates the origin database and also updates its closest GDN edge cache. In essence, this acts as an automatic purge and replacement.
- Any future access to the updated data is served directly from the closest GDN edge location.
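The steps above can be sketched in a few lines of python, with dicts standing in for the origin database and the GDN edge cache. A single shared dict models the GDN's automatic geo-replication, since a write in one region is visible from every region; all names are illustrative, not part of the Macrometa API:

```python
# Sketch of the multi-region flow described in the steps above.
origin_db = {"doc:1": "v1"}
gdn_cache = {}  # shared dict models GDN's automatic geo-replication

def app_read(region, key):
    """Serve from the edge cache; fall back to the origin on a miss
    and populate the cache so later reads (anywhere) hit the edge."""
    if key in gdn_cache:
        return gdn_cache[key], f"{region}-edge-cache"
    value = origin_db[key]
    gdn_cache[key] = value          # populate the edge; GDN replicates it
    return value, "origin"

def app_write(region, key, value):
    origin_db[key] = value          # update the origin database
    gdn_cache[key] = value          # update the closest edge: purge + replace

# Step 1: the first read in us-east misses and falls back to the origin.
_, src = app_read("us-east", "doc:1")
assert src == "origin"
# Step 2: the second read in us-east hits the local edge cache.
_, src = app_read("us-east", "doc:1")
assert src == "us-east-edge-cache"
# Step 3: a reader in Europe also hits its edge, thanks to geo-replication.
_, src = app_read("eu-west", "doc:1")
assert src == "eu-west-edge-cache"
# Step 4: a write updates both the origin and the edge cache.
app_write("eu-west", "doc:1", "v2")
# Step 5: subsequent reads anywhere see the new value from the edge.
val, src = app_read("us-east", "doc:1")
assert val == "v2" and src == "us-east-edge-cache"
```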