Featured Service: LogDNA
Each week, we feature one of our partners to talk about their service, their story, and what brought them to Manifold.
LogDNA is a cloud-based log management system that allows you to aggregate all system and application logs into one efficient platform. It’s a tool that engineering and DevOps teams across the board sorely need. They provide an easy way for teams to get information about their application’s infrastructure, helping them avoid outages and shorten the ones that do affect a business’s customers.
Although their service provides tremendous value to businesses by alleviating the impact of issues, the team didn’t begin here. When the founders were accepted into Y Combinator, they realized that an internal tool they had built had huge potential. So, they pivoted. Since then, they’ve raised $1.3M in funding to leverage machine learning in identifying issues in IT log data.
Sitara: What made you decide to partner with Manifold?
Chris: The team loved Manifold’s vision of a one-stop shop approach to purchasing dev tools. A single dashboard to manage developer software made total sense when we were approached with the opportunity to create a LogDNA integration.
S: What are the characteristics of a great ‘log’?
C: It’s not just about logging everything; the more important aspect is determining what is noise versus useful information. Great logging, to us, means the ability to identify which logs to store and how the stored data is structured, like the way we pick up JSON. This provides the ability to search historical data precisely and efficiently, and makes it easy to create the right dashboard to monitor abnormal behavior.
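Chris’s point about structure can be sketched in a few lines. This is a minimal illustration, not LogDNA’s API: the helper and field names here are hypothetical, but a platform that parses JSON (as described above) can index each field for search.

```python
import json
import sys
from datetime import datetime, timezone

def log_event(level, message, **fields):
    """Emit one structured JSON log line. Helper and field names are illustrative."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "message": message,
        **fields,
    }
    line = json.dumps(record)
    print(line, file=sys.stdout)
    return line

# A structured entry can be searched by field, not just by substring:
entry = log_event("error", "payment failed", order_id="A-1042", retries=3)
```

Because each value lives in a named field, a query like `order_id:A-1042` can go straight to the relevant lines instead of grepping free text.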
S: What does minimal viable logging look like for developers?
C: The first step is simply to start logging something. Once you’ve added your first log statement, start adding logging statements wherever you think that diagnostic information might come in handy someday. When there’s a fire, you will thank yourself for erring on the side of caution. As you instrument your code with logs and fight fires, you will naturally develop an intuition that will lead to writing better log statements. When all else fails, you can log your objects as JSON, and we’ll actually parse the fields so you can search directly on them.
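The “log your objects as JSON” fallback might look like this minimal sketch. The `log_json` helper and its field names are assumptions for illustration, not part of any LogDNA SDK:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")

def log_json(logger, event, **fields):
    # Serialize the payload up front so a platform that parses JSON
    # can index each field, instead of logging an opaque repr().
    payload = json.dumps({"event": event, **fields})
    logger.info(payload)
    return payload

payload = log_json(log, "order_received", order_id=1042, total_cents=2599)
```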
“Logs are not just for fighting fires, but also for developing new code.”
S: How soon should you introduce logging into a project or, what’s the last possible moment you’d want to introduce logging into a project?
C: You should start thinking about logging the moment you start writing code. Logs are not just for fighting fires, but also for developing new code. When the product is in the hands of actual beta users or customers, new things can (and will) go wrong, so having those log statements left over from development pays off immediately. The last possible moment to instrument your code with logs would have to be just before deploying to production. Because once you deploy, the clock starts ticking, and if a fire occurs, there are no logs to help you until you deploy new code.
The last possible moment to introduce logs is when you’re fully in live beta and customers are using your product. Those are the moments when you want to move fast and iterate on your product based on feedback, not keep the lights on because you’re busy debugging issues that could have been prevented.
S: Do modern architectures like microservices/serverless make logging more or less important?
C: In the past, it was common to simply ssh into a network machine and manually view the log files. However, with the rapid adoption of microservices architectures, it becomes infeasible to log into each of the hundreds (if not thousands) of microservices involved in any given infrastructure incident. This is where cloud logging aggregation comes in.
Once your logs are aggregated, it becomes a question of how fast you can retrieve the results you’re looking for. This comes down to two factors: (i) the specificity of the logging statements you instrumented, and (ii) the search speed of the logging platform, hence our own emphasis on providing lightning-fast search.
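The first factor, specificity, is worth a concrete sketch. The helper below is hypothetical (the service names and request ID format are invented for illustration), but it shows the difference between a message you can search for and one you can’t:

```python
import logging

log = logging.getLogger("gateway")

# Vague: log.error("request failed")  -- matches everything, identifies nothing.
def log_request_failure(service, endpoint, status, request_id):
    # Hypothetical helper: specific wording plus identifiers narrows a search
    # across millions of aggregated lines down to the relevant few.
    message = (
        f"upstream request to {service} {endpoint} "
        f"failed with HTTP {status} (request_id={request_id})"
    )
    log.error(message)
    return message

msg = log_request_failure("billing", "/charge", 503, "req-8d2f")
```

A query for `request_id=req-8d2f` or `HTTP 503` now lands on exactly the lines tied to the incident, which is what makes aggregated search fast in practice.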
S: How is machine learning impacting logging?
C: We’re actually in the early stages of exploring the applications of machine learning and artificial intelligence. Today, we take a reactive approach and interact with the logs when something is down, broken, or misbehaving. In the future, machine learning will be used to predict issues early on, giving responders more time to troubleshoot. Our ultimate goal is to apply machine learning to not only predict issues before they happen, but to take preventative action before an issue becomes a problem.
S: Do you see the way developers integrate logging into their applications changing dramatically over the next 2–5 years?
C: To be honest, the way logging statements are constructed has not changed dramatically over the last couple of decades. Really, the sheer increase in logging volume is what will necessitate the change in how we use logs. Right now, the widespread use of cloud logging aggregation has at least provided the ability to easily send all of your logs to a central location. However, consuming aggregated logs has historically been a painful experience, due to how long it takes to return results and find the key piece of information needed to take action. As logging tools automatically surface more and more insights, developers will begin to see more value in logging. This, in turn, will spur developers to naturally add more logging statements as they see the direct correlation between the helpfulness of the insight and the quality of their logging.