How We Scaled It: Facebook's Online Data Infrastructure to 1B+ Users

Rockset founders Venkat Venkataramani and Dhruba Borthakur share their experience scaling Facebook’s online data infrastructure to 1B+ users. Central to the discussion is the theme of moving fast without breaking things. Thousands of new product features, from the Like button to Groups, launched as Facebook rapidly grew its user base. To keep Facebook always online, the team adopted a service-oriented architecture and built their own technologies, including RocksDB, Scuba and TAO. Learn how Facebook scaled, trends in the real-time data space, and key considerations when scaling your own data infrastructure.


Speakers

Venkat Venkataramani is the CEO and co-founder of Rockset. He was previously an Engineering Director in the Facebook infrastructure team responsible for all online data services that stored and served Facebook user data. Collectively, these systems worked across 5 geographies and served more than 5 billion queries a second. Prior to Facebook, Venkat worked on the Oracle Database.
Dhruba Borthakur is the CTO and co-founder of Rockset. He was an engineer on the database team at Facebook, where he was the founding engineer of the RocksDB data store. Earlier at Yahoo, he was one of the founding engineers of the Hadoop Distributed File System.

Show Notes

Kevin:

Hello everyone, and welcome. I'm joined today by our founders here at Rockset, Venkat and Dhruba. My name is Kevin, and we'll be having a chat today with both Venkat and Dhruba. They bring with them lots of great experience from scaling Facebook to a billion-plus users, as I understand it, but we'll hear more about that as they share later in this session. First, rather than have me introduce them, it would be great to hear from them directly. Tell us a bit more about your backgrounds. So, let's start with Venkat. Tell us a bit more about what you did before Facebook, your interests and experience, and so on.

Venkat:

Thanks, Kevin. Prior to Rockset I was running online data infrastructure at Facebook. And prior to Facebook, I was building databases at Oracle, in the early 2000s. At Facebook, I started sometime in 2007 and was there for eight years. I was managing all the teams responsible for online data management across all Facebook products and the backend that was storing and serving all the Facebook user data. I was part of the team that helped scale Facebook from 30 million monthly actives or so in 2007 to a billion and a half by the time I left in 2015. I worked on a lot of hard problems with amazing people, learnt a lot, and I'm very thankful for that experience.

Kevin:

Awesome. And then Dhruba?

Dhruba:

Yeah.

Kevin:

Yeah, you were already working on what is now known as big data technology even prior to Facebook. So tell me more about that experience prior to Facebook and then also how you got to Facebook.

Dhruba:

Sure. Yeah. I spent a lot of time building a lot of these backend technologies for data processing. My first experience was with the Hadoop File System, which is what I started to build when I was at Yahoo. This was 2006 or something. The challenge there was obviously to process a lot of data using Hadoop. So I built a lot of core pieces of Hadoop, the 0.1 release, which was kind of an infant Hadoop. And then I moved to Facebook, which I think is where you're going to ask me most of the questions today. When I moved to Facebook, I started off with the Hadoop team there. We built a Hadoop cluster, which was a 20 node Hadoop cluster at the time, and did a lot of processing. But then over time I moved on to more real-time systems inside Facebook and built this open source database called RocksDB, which is mostly for online data serving for large databases. I can tell you more about some of these internals, but yeah, that's my basic introduction: building a lot of these backend data technologies.

Kevin:

Sounds good. I'm sure we'll get to RocksDB and such. So, did you know each other prior to Facebook, or did you meet at Facebook?

Venkat:

We met at Facebook. I remember we had a common friend who had worked with Dhruba previously. I think the first week, or the first day Dhruba started, he made sure both of us met each other. We were not on the same team, but it was [Mark White Gosky 00:03:18], he had worked with Dhruba at a prior job at a prior company, and I still remember Mark pulling us aside. I don't know what he thought, maybe he saw the future or something, but he made sure we both met on the first day of the first week Dhruba was there.

Dhruba:

Yeah. This was a time when Facebook was very small. The whole engineering team was probably like 50 people or something like that, I mean the core engineering team. One more common thread is that both Venkat and I graduated from the University of Wisconsin, Madison. So we had some common background from other places, but I never met him before I joined Facebook.

Kevin:

Yeah. Great story. I'm glad it turned out and worked out that way. All right. So, Venkat, there were about 50 in the engineering team when you joined, and I think you mentioned it was about 30 million users at the time you joined Facebook. Maybe tell us a little bit more about the online data infrastructure at the time you joined. So this is the "before" picture. What was that like?

Venkat:

The before picture: when I started, Facebook's entire stack was built more or less on off-the-shelf open source, amazing technologies like memcached and MySQL, and they really helped carry the scaling demands of Facebook. I would say through the course of those eight years, it was literally a path from taking open source software and customizing it to our needs, to actually building infrastructure by Facebook for Facebook. That was really what was needed to help scale Facebook through those years. A lot of people would ask, "Oh, you scaled through so much user growth, that must have been a difficult challenge." I always tell them the one thing that we could always model and take for granted was the user growth. It was actually the easiest part of scaling an online data backend.

Venkat:

I think the hardest part was always the product launches, because before and after the Like button, in a matter of one week, the backend workloads had dramatically changed. And probably every week, or every month, there was a story like that of a new product launch. Being versatile and being able to scale while fostering innovation, so that your product engineers are bottlenecked on their creativity and not on what the data infrastructure can do for them, was really the bigger challenge. It was not just about scaling user growth. Yes, that also had its own challenges, but those were well understood. Creating an environment where you can accelerate product development and move really fast was the biggest challenge, while making sure that the site never goes down. Right?

Kevin:

Yeah.

Venkat:

I think that's what I would say was very, very good about that experience.

Kevin:

For sure. Okay. So you mentioned 30 million to about 1.5 billion, but user growth was kind of a given, and then challenges around new product launches, like the Like button and similar features. What are some of your favorite war stories from those? Do you have any to share with us?

Venkat:

Oh, wow. How much time do you have? If I'm counting scars, I think our entire time will be up. No, there are too many to really tell, but particular launches always stay with you. I would say one of the highlights for me was when Facebook launched Groups, which is something you take for granted now. There was a Facebook when there were no groups and you just had friends and friend lists. The Groups product was one of those things that was a fully featured product, with lots of advanced features and functionality, built entirely on top of the set of abstractions that we had built for the product teams. It was such a big product launch, but it was just yet another day at work for everybody.

Venkat:

I think it was very impactful because a product team could innovate and actually build a very, very compelling, useful product while taking the infrastructure for granted, and everything just worked and scaled really, really well through the launch and even many, many years after the launch. I would say that is one of the highlights. That is always what we aspire to: that the product teams are bottlenecked on their creativity and not by infrastructure. When we saw that come to fruition, it was definitely a highlight.

Kevin:

Okay. And one thing you mentioned earlier was that the infrastructure evolved from open source tools to building your own. We'd like to hear more about some of the things you did as a team on this front. Let's get your perspective first, Venkat, and then maybe Dhruba, you can talk about some of the things you worked on in terms of building technologies at Facebook.

Venkat:

For sure. I'll definitely let Dhruba talk a lot about this, because by the time I left, Facebook's online data infrastructure and RocksDB were kind of intertwined with each other. You couldn't really think about one without the other. So I think Dhruba's perspective will be super valuable. But I would say the biggest set of innovations we had to do was about scaling Facebook from a few clusters in one data center in one region to, by the time I left, maybe five or six regions and many different clusters within each region. The biggest achievement, I would say, is that all of those big re-architectures and all of that engineering happened while the site was always up and running.

Venkat:

And I would say the highlight of that whole eight years for me is that there were maybe only one or two really, really bad major incidents where all of Facebook was down for everybody. One of the guiding principles for our design was always, "Some part of Facebook can be down for everybody, or all of Facebook can be down for a small percentage of people, but all of Facebook should never be down for everybody." We carried that through all these enormous scaling challenges: going from one region to multiple regions and multiple clusters, and going from an open source stack to a more or less proprietary stack that was built, customized, and optimized for those workloads.

Venkat:

Doing all of that without major downtimes and fail whales is probably the single biggest thing that I'm most proud of. By no means am I trying to say what we did was the only thing needed. There were a lot of teams, a lot of amazing people working together to make that happen. But whatever the team that I was responsible for did was definitely necessary for Facebook to achieve that. I'm very curious to hear what Dhruba has to say, because he also saw almost all of that during that time.

Dhruba:

Yeah. Like Venkat said, we saw a lot of growth when we were at Facebook. But before Facebook, I was at Yahoo, building the Hadoop File System and some of these Hadoop backends. When I moved to Facebook, the first thing I remember is that we were running some daily reporting, and it used to take almost a day because it was running on Oracle. Then we put in a 20 node Hadoop cluster, and the whole thing was done in 25 minutes or something. That's when people actually figured out, "Oh, look, decision-making based on some of these reports would really benefit if we can scale out these computations using Hadoop." This was all batch computation at the time. A lot of indexing was happening, but it was all in batches.

Dhruba:

Then over time, what I saw is that users at Facebook started to grow a lot. And for things like spam detection, you can't afford to wait an hour before you find out that, "Hey, somebody posted a bad URL to Facebook for sharing." There were a lot of requirements saying, "Hey, how can we scale these things so that we can detect spam immediately, as soon as somebody produces it?" This is the time when I started to think more about how I could use Hadoop for some of these things, and quickly realized that Hadoop is great for storing a lot of data, but it's not great at extracting insights within a few seconds of when the data was produced. This is what triggered us to do more online, real-time decision-making software.

Dhruba:

And one of the core pieces that we built was RocksDB. I remember going to Venkat, because he was my manager at the time, and he said, "Yeah, let's try this and see how it goes." RocksDB was one of the core pieces. It was used to index a lot of this big social graph. The reason we index is because that's the only way to gather or make decisions on large data sets quickly. You cannot afford to do a parallelized scan like Hadoop used to do, or like any other warehouse you might be using; they all parallelize scans to give you results. So when your data sets become very large, either you have to spend a lot of money scanning everything for every query, or you have to go to an indexing strategy. We picked the indexing strategy when we were at Facebook. We said we would build a great indexing system, and that's how RocksDB started. It could index a lot of data and give you quick insights. Some of the first use cases were ad systems. Oh, there's another story that I can share here.

Dhruba:

One of the first use cases was an ads backend. They used to run a 500 node HBase cluster, which again is for fast real-time querying on these large datasets. When they moved to RocksDB, that 500 node HBase cluster got reduced to a 32 node RocksDB cluster. This is when people at Facebook, mostly engineers, could look at all these metrics and figure out, "Oh, RocksDB is really suitable for large-scale indexing." Nothing wrong with Hadoop or HBase, those are good systems as well. But if the focus is extracting quick information out of your data set, that's when you need things like RocksDB. This is how RocksDB became popular. Maybe Venkat, do you want to share anything about how RocksDB got into the online data serving path? I talked more about how RocksDB got into the decision-making processes at Facebook, but what about online data serving?

Venkat:

So newsfeed is a great example. I think you're really talking about the arc that Facebook was probably one of the earliest companies to see, which was going from batch to real-time and the advantages that brings. And I don't mean just real-time business operation decisions, I'm talking about the product working in real time. Newsfeed is a great example. It is actually a real-time indexing system behind the covers, if you think about it. But it was not built like that.

Venkat:

Back in 2006, 2007, when it was built, it was a batch system. I was part of the database team that used to be in a world of pain when these batch jobs would come, look at what everybody did, and try to build the newsfeed for every user. The workload was just getting larger and larger every day, and it was on a completely unscalable track. We had to completely flip it. Instead of doing these huge fan-outs, where a celebrity creates a post and you make 5,000 copies of it, we flipped it around and built special-purpose indexes that would just index everybody's stories, keep them online, keep them live, with an architecture that could very quickly go and find all the activities and stories involving all of my friends. Quickly get a candidate set, rank it using the latest ML models they're A/B testing, and then figure out what is the most interesting thing for the given user.

Venkat:

Once we flipped it around, suddenly not only was it massively scalable, but you could also iterate on it very, very quickly. The second phase of that was going from a custom, special-purpose newsfeed backend to actually just running it on top of RocksDB. That was a very big project that Dhruba was part of, I vividly remember, where they gave up all the custom-built optimizations and used what is considered a more general-purpose embedded storage engine like RocksDB. You would think going from special purpose to general purpose makes things less efficient, but RocksDB was pretty good and had done so much innovation behind the scenes that it was actually more efficient than even what they had built previously. I think that was a great success story. Facebook was going through the same path that a lot of enterprises we see now are going through; they were just five, six years ahead of everybody else.
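To make the "flip" Venkat describes concrete, here is a minimal sketch of the pull-based model: index each story once at write time, then assemble and rank a feed at read time. The names, data shapes, and ranking function are illustrative assumptions, not Facebook's actual newsfeed code.

```python
# Sketch of the "pull" model: instead of fanning a celebrity's post out to
# millions of follower feeds at write time, each story is indexed once by
# author, and a feed is assembled and ranked at read time.
from collections import defaultdict
from typing import Callable

story_index: dict[str, list[dict]] = defaultdict(list)  # author_id -> stories

def publish(author_id: str, story: dict) -> None:
    """Write path: one indexed write per story, regardless of follower count."""
    story_index[author_id].append(story)

def build_feed(friend_ids: list[str],
               rank: Callable[[dict], float],
               limit: int = 20) -> list[dict]:
    """Read path: gather a candidate set from friends' indexes, then rank it."""
    candidates = [s for friend in friend_ids for s in story_index[friend]]
    return sorted(candidates, key=rank, reverse=True)[:limit]

# Usage: the ranking function can be swapped freely (e.g. the latest A/B-tested model).
publish("celebrity", {"id": 1, "text": "hello", "score_hint": 0.9})
publish("friend_a", {"id": 2, "text": "lunch", "score_hint": 0.2})
feed = build_feed(["celebrity", "friend_a"], rank=lambda s: s["score_hint"])
```

The point of this shape is that write cost no longer scales with follower count; the read path does the fan-in, which is also why iterating on ranking models becomes cheap.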

Kevin:

That's awesome. That's a great story of, I guess, innovation being driven by your requirements to move from batch to real-time. I understand RocksDB is now being used in many places outside of Facebook, in Kafka and some databases as well, like CockroachDB and so on. I'm curious, Dhruba: you, or your team, decided to create RocksDB. Were there things that existing databases could not do that caused you to want to create RocksDB?

Dhruba:

Yeah, absolutely. That's a good question. A lot of the databases that we were using earlier were all B-tree based databases, which means it's like a tree where, when you update, you have to read some blocks on the disk, update them, and write them back to the disk. RocksDB is different. It's not a B-tree based database. It's an LSM-based database, a log-structured merge tree, which means that new writes always go to new places in the storage system. Then there's a background process which takes all these writes and does a compaction, which eliminates all the fluff that you might have accumulated over time. Very different from B-trees in general. But LSM databases are the ones which are ideal for scaling to large datasets, especially when you are doing updates to the database.

Dhruba:

When you have read-only data, you can use any warehouse, you don't need an LSM. You can keep storing new partitions in your warehouse and everything is good. But when you need a mutable database at large scale, an LSM storage engine is the right choice, because new writes go to new places automatically and the database handles it, instead of you as a user managing partitions and segments of your data set. For example, if you use a warehouse, most likely you'll create a daily partition and put data there, whereas if you use RocksDB, you can overwrite existing data and the system takes care of managing these things, updating them and sorting them together so that you get good performance out of the system.
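As a rough illustration of the log-structured merge idea Dhruba describes, here is a toy sketch assuming a single-process, in-memory store: writes land in a memtable, get flushed into immutable sorted runs, and a compaction step merges runs and discards overwritten values. This is a teaching aid, not RocksDB's implementation.

```python
# Toy LSM sketch: new writes always land in new places (memtable, then
# immutable sorted runs); compaction later merges runs and drops stale values.
from typing import Optional

class ToyLSM:
    def __init__(self, memtable_limit: int = 4):
        self.memtable: dict[str, str] = {}
        self.runs: list[dict[str, str]] = []   # oldest run first, newest last
        self.memtable_limit = memtable_limit

    def put(self, key: str, value: str) -> None:
        self.memtable[key] = value             # no in-place update of old data
        if len(self.memtable) >= self.memtable_limit:
            self.flush()

    def flush(self) -> None:
        # Freeze the memtable into an immutable sorted run.
        self.runs.append(dict(sorted(self.memtable.items())))
        self.memtable = {}

    def get(self, key: str) -> Optional[str]:
        if key in self.memtable:               # newest data wins
            return self.memtable[key]
        for run in reversed(self.runs):
            if key in run:
                return run[key]
        return None

    def compact(self) -> None:
        # Merge all runs; newer runs overwrite older ones, so stale values disappear.
        merged: dict[str, str] = {}
        for run in self.runs:
            merged.update(run)
        self.runs = [dict(sorted(merged.items()))]
```

The contrast with a B-tree is visible in `put`: nothing is read or rewritten in place, which is what makes the write path friendly to SSDs and large mutable datasets.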

Dhruba:

So, yeah, the LSM engine was definitely something that we made quite popular in the industry, I feel, as part of the RocksDB database. There were other LSM engines before RocksDB. We looked at a lot of the code in LevelDB, which is from Google. We saw that LevelDB had great advantages, but certain things were missing; in particular, that code never actually got to run on server-size databases. It was mostly for the Chromium browser, which is a very small setting. But the concept is what we borrowed from LevelDB, saying, "Hey, we can build LSM databases for large-scale indexing of data." That's how the LSM approach became popular.

Dhruba:

The other thing that really helped us, I think, is that we could move these LSM engines, these new databases, piecemeal into the Facebook backend. It's not like we replaced the entire Facebook backend with RocksDB in one day. Because Facebook had a great services architecture, you could do it piecemeal: you have an ads service, a spam detection service, a growth service, and all of these services are separate from one another. So it let us move really quickly as developers. I could migrate one service to one database and keep the other services as they were. Maybe Venkat can talk about how we did this. But the agility was the real key for us to be able to migrate to new storage systems or new ways of processing data. Do you have anything to add?

Venkat:

Absolutely. I think RocksDB was absolutely necessary for Facebook to transition from hard drives to SSDs, number one. Without it, we couldn't have done what we did. Number two, as Dhruba pointed out, mutability is very important. It is very hard to bet your entire infrastructure on a new hardware technology like SSDs without the assistance of the software that RocksDB gave you. I would say the third thing, often not talked about with LSM engines, that Facebook benefited from massively was that it compresses better. RocksDB's compression was so good that circa 2016, I think, a year after I left Facebook, is when it actually went live, but the project was kicked off a year before I left.

Venkat:

When all of Facebook's MySQL backend transitioned from running on InnoDB, which is the storage engine of choice for MySQL databases, to MySQL on RocksDB, even comparing compressed InnoDB against compression on RocksDB, RocksDB was still able to reduce the data footprint by more than 50%. At Facebook scale, the amount of flash that Facebook had to buy literally went to half overnight. It was a one-year project, but that is just massive, massive impact. It not only works better on flash drives, it also compresses better, so there are just massive advantages because of that. I think Dhruba also mentioned the service-oriented architecture.

Venkat:

Facebook invented Thrift so that they could build services. This was before gRPC existed. These days, this even happens at the GraphQL level, where there are a lot of backend services and you need a glue layer that pulls all of these different backend APIs into a single unified GraphQL endpoint. A version of that existed in the early days of Facebook, because service-oriented architecture, or microservices, was the only way we could move fast, the only way we could innovate.

Venkat:

All of these services, whether it's newsfeed, the chat or messenger services, or the spam-fighting services: there were literally hundreds and hundreds of them. I'm not exaggerating the numbers. If I were to count them one after the other, maybe there were 2,000 of them in 2015, and maybe there are, I don't know, 20,000 of them now. All of those individual services really helped each team set an agenda, set a roadmap, and move really fast, without having to coordinate their deployment and activities with every other team in the company. That was really, really important for Facebook to be able to move at the pace that they did.

Kevin:

It's interesting, you brought up the Facebook mantra of "move fast" alongside microservices. I'm curious, what's your advice to development teams looking at microservices nowadays? Would you say that's a best practice, the way to go for someone implementing an online service?

Venkat:

I think it really depends. If you build microservices for the sake of building microservices, you're probably doing it wrong. It is almost as much a software engineering exercise as a systems architecture exercise. When your team grows beyond a particular size, these things become even more important, not just when your infrastructure crosses a particular scale, from my perspective. It's all about separation of concerns and performance isolation when you're scaling. If you do it right, it'll help you move faster. You will have a lot of services that scale completely independently and, even more importantly, fail completely independently of each other, which means your product can isolate those failures much better.

Venkat:

Imagine if every single product like newsfeed and chat and messenger were built on top of the core MySQL stack. Facebook would not be as highly available as I keep saying I'm very proud of. There's no way, if all of those things were pounding the MySQL backend to render everybody's pages, to find all the interesting stories, to do graph search. We haven't even talked about the search functionality. There are so many important services that were built that were actually real-time indexes off the social graph, but outside your core OLTP databases.

Venkat:

That was an extremely important part, and I think it helped Facebook scale. Microservices doesn't mean just that your logic is separated; your data, your failure domains, your teams, and their deployment processes are all separated out, with a clear contract for how they all come together, so you still present a cohesive product to the end user. But behind the scenes you really have an architecture that can independently evolve at scale. That was really, really important. Without it, there is no way Facebook could have kept up with the innovation that they had all these years.

Kevin:

Right. And is this how some of the other Facebook technologies that I've read about, like TAO and Scuba, came about, to separate those concerns? What problems were they trying to solve, and how did they come about?

Venkat:

Bigger question. TAO was really our answer to a better caching infrastructure that fronts MySQL and is scalable. I would say TAO helped us scale the online, canonical, authoritative system of record while preserving the consistency semantics, going from one data center in one region to many, many data centers, and doing that efficiently. That mandate was a very important challenge, and I think we worked very hard to make it work.

Venkat:

Just to give you some idea: by 2015, this is the number I remember when I left, TAO and that family of online infrastructure were serving 5 billion requests a second. The scale was really massive. Five nines of read availability was not enough. We had to get much closer to six nines for the product to take the infrastructure for granted. That was the scale we're talking about.
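TAO itself is a geo-distributed, graph-aware cache with carefully designed consistency semantics; the sketch below, with hypothetical function names, only illustrates the general shape of the pattern Venkat describes, a cache tier fronting an authoritative system of record.

```python
# Basic shape of a cache tier fronting MySQL: serve reads from the cache,
# fall back to the system of record on a miss, invalidate on write.
# Illustrative only -- TAO is far more sophisticated than this.
from typing import Callable

cache: dict[str, object] = {}

def read(key: str, fetch_from_mysql: Callable[[str], object]) -> object:
    if key in cache:                       # cache hit: no database round trip
        return cache[key]
    value = fetch_from_mysql(key)          # cache miss: go to the system of record
    cache[key] = value
    return value

def write(key: str, value: object, write_to_mysql: Callable[[str, object], None]) -> None:
    write_to_mysql(key, value)             # the database stays authoritative
    cache.pop(key, None)                   # invalidate so readers refetch fresh data
```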

Venkat:

Scuba was actually a very interesting side story. It was a lot more about our metrics and monitoring backend becoming real-time. It was not really Facebook user data, but there are a lot of parallels in terms of the technology we had to build, going from batch metrics and dashboards to Scuba, which was used to monitor the health of the site.

Venkat:

I was part of the team that helped conceive and build it. It was really built on the idea that when things go wrong with the site and its health, we need to know immediately. Not just that something is broken; we need a system that can quickly give you interactive queries and fast responses to interactive questions, so we know where the problem is. Is it this database? Is it that region? Is it this cluster? Is it that rack switch? Is it this particular code that we just released? To be able to interrogate the data in real time, in a matter of seconds, and detect, isolate, and remediate the problem. Scuba was one of the most important tools, I would say, that really helped us keep that uptime and keep the observability story of Facebook right at the top.

Kevin:

Awesome. That's a great summary and overview of what both of you, Venkat and Dhruba, worked on while at Facebook, the technologies and the challenges. Thanks for sharing all this great information. And then, you moved on. I want to get your perspective on what was in your mind, what motivated you to work on what is now Rockset. Maybe we'll go with Dhruba first, and then Venkat. What made you step out from all the great work at Facebook and RocksDB and start Rockset?

Dhruba:

Yeah, a good question. There are multiple reasons, but maybe I'll talk about one or two that I think are interesting. There are a lot of backend systems that we built at Facebook which I think would be really useful to a lot of enterprises out there. Take, for example, the focus in my entire time at Facebook on making systems more real-time, more interactive. I really wanted to see if there's a way for us to build similar technologies and make them useful to a lot of the enterprises that are out there. How can you make the decision-making process more real-time for these enterprises?

Dhruba:

Decision-making could be things like how to schedule your fleet management system, for example, how to schedule your transportation, or how to make sure that you're showing the right advertisement to your users at the right time, instead of waiting for many hours or days. So that's one focus. The other thing I really wanted was to build something on the cloud, because I feel that's where a lot of enterprises are moving, and I really wanted to learn a lot in the process. It's not just about adding value to enterprises, it's also about learning, saying, "Hey, how can I learn new technologies which I think will be really useful for people in the next 10 or 20 years?" Because things in infrastructure change slowly, but when they change, they make a lot of impact on users.

Dhruba:

When we started Rockset, Venkat and I were both very much aligned: we need to build something that adds value to enterprises, but built only for the cloud. So that's how we started Rockset, asking, "How can we build a cloud-native system that can index your data sets and give you quick results, so that users benefit a lot by leveraging the Rockset software?" Those were the two things that really motivated me: it's great to be able to leverage some of my existing experience and build something new.

Kevin:

Cool. How about Venkat? What was your motivation for founding Rockset?

Venkat:

I think the core motivation comes from the fact that building data-driven applications, I used to say this, almost feels like building a Rube Goldberg machine of sorts. It's too complicated; you're duct-taping completely disparate pieces of technology together to get anything done. It's very hard to build, very hard to upkeep, very hard to iterate on. We saw how well it can be done and what impact it has on an enterprise. I would say one of the biggest competitive advantages Facebook had over the competition was that it could move fast. It could out-innovate anybody, even if it was six months behind on a new feature, once it had enough users and distribution.

Venkat:

I think that is an extremely important advantage that we wanted to give, and for more and more AI applications, I think that is the only competitive advantage you need. If you can iterate on your models faster than your competition, that's it, you're going to go in and take the market. So it's going to be very, very important to everybody, in every product, how quickly you can build real-time applications.

Venkat:

If you really look at the world of application development happening, the things you can solve within a single-node relational database are kind of one size. If you're building an application that can be solved by a single-node Postgres or single-node MySQL, God bless you. You're in very good hands, you should just stay within that and enjoy the glorious product you get to use.

Venkat:

But I think the world of applications that people are trying to build is probably a hundred times bigger than what you can actually solve within a single-node OLTP database, and that's the world we live in. That world is super complicated. We wanted to do something about it and bring the power that Facebook product engineers had to move fast and quickly iterate: if you can think it, you can build it; if you can dream it, you can build it.

Venkat:

We want to bring that to everybody building data-driven applications in the cloud. That was the real genesis, the real motivation: to take what we were able to do within the boundaries of Facebook to the rest of the world. Everything followed from there. RocksDB was a very important innovation that we carried forward, and we really thought about how real-time indexing should work in the cloud. But if you come back to the arc, it's about empowering builders to out-innovate and move really, really fast. We're trying to build something useful and put it in the hands of as many people as possible. That's why we do what we do.

Kevin:

Wow. That's great. Both of you, now that you're at Rockset, you've been speaking to a number of companies and organizations trying to build their own real-time applications. Do you have a sense of the challenges people are facing? Are they similar to the ones you were facing at Facebook? Are they the same themes, or have there been changes since the time you left Facebook? Maybe Venkat first.

Venkat:

I will say one thing and I'll let Dhruba talk more. Right now, almost everybody we talk to is building these real-time applications. They may not call it that, but that's what they're doing. And they're doing it by either what I call OLTP abuse or warehouse abuse, because there isn't a really good tool, a really good solution, a really good product for it. So they end up either abusing the warehouse, which is not really built for application development, and paying through compute costs that are completely out of their control, or abusing their OLTP system and really struggling because of that. Then there is a lot of complexity that gets pushed upstream into stream processing and other types of technologies that need to co-exist with those serving layers.

Venkat:

Time and again, that's what we see: a tremendous amount of complexity, which is probably the biggest reason why they aren't doing a lot of innovation. Using a product that is fully managed, very easy to scale, and very easy to get started with, like Rockset, is really all about that: how quickly you can build, how quickly you can iterate, and how much you can take the massive scaling for granted. Dhruba, do you have anything to add?

Dhruba:

Yeah, I have one more thing to add. One of the challenges I see when I talk to many of our users and customers is that they find it really hard to find the right people to build these data processing backends. Right now a lot of enterprises have to recruit people to do the ETL-ing, transforming data or cleaning data, before they can get insights out of it. This is where Rockset also helps, because with Rockset you don't need to do much cleaning or ETL to get data into Rockset. Rockset has automatic detection of data formats and auto-schematization. Hopefully we'll be able to address this challenge, so a Rockset user doesn't have to struggle to find people to do this. We'll make the software do it instead of people doing it manually.
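As a rough illustration of what auto-schematization means conceptually, the sketch below infers a field-to-type mapping from whatever JSON documents arrive, with no schema declared up front. It is a toy example under those assumptions, not Rockset's actual ingestion pipeline.

```python
# Toy illustration of auto-schematization: derive a field -> observed-types map
# from incoming JSON documents instead of requiring a schema before ingest.
import json
from collections import defaultdict

def infer_schema(json_docs: list[str]) -> dict[str, set[str]]:
    schema: dict[str, set[str]] = defaultdict(set)
    for doc in json_docs:
        for field, value in json.loads(doc).items():
            schema[field].add(type(value).__name__)   # track every type seen per field
    return dict(schema)

docs = ['{"user": "a", "clicks": 3}',
        '{"user": "b", "clicks": 2.5, "tags": ["x"]}']
print(infer_schema(docs))
# {'user': {'str'}, 'clicks': {'int', 'float'}, 'tags': {'list'}}
```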

Kevin:

Great. Thanks for sharing. So we talked about challenges; what are some pieces of advice you might give someone? If I were setting up an online product or online service today, what are some things I should be thinking about around performance, scaling, reliability, and application development? What advice would you give someone in that position today?

Venkat:

The biggest thing I would say is: focus on what moves your business and use managed services in the cloud for everything else. We're living in the cloud era. This whole phase is about massive scalability and fully managed, amazing services being available. We've gone from the infrastructure-as-a-service era to platform-as-a-service, to even database products that resemble SaaS a lot more than a PaaS. Rockset is a good example, and I'll just use it as a contrast. We're not like RDS, where we can just run the Postgres database for you but you still have to manage the schema, load data, tune it, and optimize it. Rockset works out of the box: it automatically schematizes your data and indexes it out of the box to give you good performance. I'm just contrasting, using Rockset as an example.

Venkat:

The biggest thing I would say for all enterprises is that you can get everybody in your company really focused on how to move your business forward: how to work on the things that differentiate you from the competition, how to build better quality products, how to run a more efficient business, make business operations real-time, use the data that you have to make those decisions in real time, and build a superior product that your competition cannot, by leveraging all these services and the real-time nature of the products, as opposed to building massive data infrastructure teams and what have you, because you don't need that anymore.

Venkat:

I think we are living in an era where we can do that. Companies like Rockset and the like can do it at scale for a massive number of enterprises very, very efficiently, while reliability and performance and all of that are features you just get out of the box, as opposed to requiring people, a lot of manual tuning, and a lot of building. So that would be my biggest advice: build a team that focuses on moving your business forward while you take your infrastructure for granted.

Dhruba:

Yeah.

Kevin:

Excellent.

Dhruba:

It's a good point. Yeah. Maybe I can add one more thing.

Kevin:

Please.

Dhruba:

[crosstalk 00:41:40] When you were speaking, it felt to me that maybe I can draw an analogy. Usually it's not the biggest species that survives, nor the most intelligent species, but in general it's the most adaptable species that survives in the long term. Rockset, I feel, really helps with some of this adaptability: you can run smaller data sets, you can launch bigger data sets, you can run different kinds of backends on Rockset. My takeaway for enterprises is that if there's a way for you to innovate fast and be adaptable, I think that really captures how to outmaneuver your competitors.

Kevin:

That makes a lot of sense. Okay. So, before we close, one more for both of you: trends and predictions in the industry. Dhruba, you've worked on a number of well-known data technologies now; you've seen technologies come and go. What are some trends you are looking at? What are you currently seeing in the data or application development space?

Dhruba:

Yeah. I see one or two trends, and the trends I see are close to the area that I work in. One trend I definitely see is that there's a lot of data processing moving to the cloud, a lot of processing by many enterprises. The other is that many of these data processing tasks can get their job done very quickly by leveraging things like indexing, or by leveraging cloud elasticity, where you don't have to buy capacity up front; you can add capacity as and when needed. Those are the two big things in my mind in the data area. Maybe there is more about how teams organize and other things; I'll let Venkat speak more or add to this.

Kevin:

Yeah. Venkat, your thoughts, what can we look forward to?

Venkat:

Yeah. I think the first set of systems that moved over, once the three cloud providers started offering really good infrastructure as a service, were the warehouses and the lakes. It's offline, it's a lot of data, it's very hard to manage. Them all moving to the cloud gave birth to very interesting companies that are not Amazon or Microsoft or Google. The next big movement to the cloud, I think, is a lot of IT applications, IT operational applications, business applications, and consumer applications also moving in great droves. These were very difficult to move, because if you have a giant footprint in a big data center, it's very intricate, with lots of dependencies and whatnot.

Venkat:

But what I'm expecting is that more and more traditional enterprises, and more and more traditional IT application development and business operation tools, are also going to go to the cloud, and it's not going to be just about warehouses and data lakes. Those dominate in the data space, at least; they dominate the cloud expenses, if you think about it. If you take AWS, how many people use Redshift and S3 is probably way, way larger than lots of other things. But there are other application development backends that people think are big. Still, the vast majority of IT, in my opinion, runs on Oracle on-prem. I think the next big wave of movement is going to be application development moving away from legacy into the next generation of data backends that are born in the cloud, built for the cloud. That's the biggest thing I'm looking forward to.

Kevin:

That's great. Well, Venkat, Dhruba, it was certainly enjoyable talking with you both today. If you were listening, I hope you got something useful from this conversation. I certainly learned a lot from it. So thanks once again, Venkat and Dhruba, and to all our listeners for joining this chat. Thank you.

Dhruba:

Thank you, Kevin.

Venkat:

Thank you.

Dhruba:

Bye.

Venkat:

Thanks, Kevin.

