• Machindo@lemmy.ml

    I’m running Grafana Loki for my company now and I’ll never go back to anything else. Loki acts like grep, is blazing fast, and is low maintenance. If it sounds like magic, it kind of is.


    I saw this post and genuinely thought one of my teammates wrote it.

    I had to manage an ELK stack and it was a full-time job when we were supposed to be focusing on other important SRE work.

    Then we switched to Loki + Grafana and it’s been amazing. Out of the box Loki is basically k8s-wide grep, but it also has an amazing query language for filtering and transforming logs into tables, or even running Prometheus-style queries on top of a log query, which gives you a graph.

    Managing Loki is super simple because it makes the trade-off of not indexing anything other than the Kubernetes labels, which are always going to be the same regardless of the app. And retention is a breeze since all the data is stored in an object storage bucket and not on the cluster.
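
    To make that concrete, here’s a rough sketch of both styles of query using logcli (Loki’s CLI). The namespace/app label values and the pod label are assumptions about how your labels are set up, and flags may differ between versions:

      # Grep-style: every matching line from every pod carrying these labels
      logcli query --since=1h '{namespace="prod", app="checkout"} |= "timeout"'

      # Prometheus-style metric on top of the same log query: matches per second, per pod
      logcli instant-query 'sum(rate({namespace="prod", app="checkout"} |= "timeout" [5m])) by (pod)'

    The same LogQL pasted into Grafana Explore is where the graphs come from.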

    Sorry for gushing about Loki but I genuinely was that rage wojak before we switched. I am so much happier now.

    • Jo Miran@lemmy.ml

      We do Grafana + Prometheus for most of our clients but I think that adding Loki into the mix might be necessary. The number of clients that are missing basic events like “you’ve run out of disk space…two days ago” is too damn high.

      • darvit@lemmy.darvit.nl

        Sounds like you need an alerting/monitoring system and not a logging system. Something like Nagios, where you immediately get an alert if something is past its limits and you don’t have to rely on logging.

        • Jo Miran@lemmy.ml

          Preaching to the choir. They hire us to performance-tune their app, but then their IT staff manages to not notice the most basic things.

      • Machindo@lemmy.ml

        I would add Alertmanager to your stack if you haven’t already. It’s pretty tightly integrated with Prometheus. There are canned alerting rules based on predicting disk space full in X number of days. We wire Alertmanager to PagerDuty.
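
        For reference, the idea behind those canned rules is a PromQL predict_linear() over the node_exporter disk metrics. A rough sketch you could run ad hoc (the Prometheus URL is made up):

          # Does the 6h trend say any filesystem hits zero free bytes within 4 days?
          promtool query instant http://prometheus.example.internal:9090 \
            'predict_linear(node_filesystem_avail_bytes{fstype!~"tmpfs"}[6h], 4 * 24 * 3600) < 0'

        The alerting-rule version is the same expression in a rule file, with Alertmanager routing the firing alert to PagerDuty.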

      • dan@upvote.au

        The number of clients that are missing basic events like "you’ve run out of disk space

        For my personal servers, I use Netdata for this. Works pretty well.

    • jelloeater - Ops Mgr@lemmy.world

      Get Datadog if you can afford it. Shit’s magic. New Relic is nice too, and cheaper. I used to use Graylog and it was OK; Loki was definitely less work to maintain.

      • thirteene@lemmy.world

        Datadog logs are basically in beta. You can send them synthetics, APM, and RUM, but I would be interested in spinning up my own private Graylog instance to get away from DD logs.

          • thirteene@lemmy.world

            It’s released, but it’s insanely feature-light, has massive ingestion problems, requires massive collection overhead, and doesn’t have a fraction of Splunk’s indexing. And it uses the standard DD UI, which I personally don’t like. Logs aren’t metrics; they need a different interface.

  • flamingo_pinyata

    Good luck connecting to each of the 36 pods and grepping the file over and over again

    • whodatdair@lemmy.blahaj.zone

      for X in $(seq -f 'host%02g' 1 9); do echo "$X"; ssh -q "$X" 'grep the shit'; done

      :)

      But yeah fair, I do actually use a big data stack for log monitoring and searching… it’s just way more usable haha

    • NovaPrime@lemmy.ml

      Stern has been around forever. You could also just use a shared label selector with kubectl logs and then grep from there. You make it sound difficult if not impossible, but it’s not. Combine it with egrep and you can pretty much do anything you want right there on the CLI.
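
      Roughly, either of these gets you there (the namespace and label values are made up):

        # stern: follow every pod matching a selector, then grep
        stern -n prod -l app=checkout --since 15m | grep -i timeout

        # plain kubectl: same selector, pod name prefixed on every line
        kubectl logs -n prod -l app=checkout --all-containers --prefix --tail=500 | egrep -i 'error|timeout'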

    • brokenlcd@feddit.it

      I don’t know how k8s works, but if there is a way to execute just one command in a container and then exit out of it, like chroot, wouldn’t it be possible to just use xargs with a list of the container names?
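
      There is: kubectl exec runs a single command in a container and exits, much like you describe. So something along these lines works; the namespace and the in-container log path are made up, and most k8s apps just log to stdout, where kubectl logs is the easier route:

        # Run a one-off grep inside every pod in the namespace
        kubectl get pods -n prod -o name \
          | xargs -I{} sh -c 'echo "== {} =="; kubectl exec -n prod {} -- grep -i "timeout" /var/log/app.log'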

    • marcos@lemmy.world

      Let me introduce you to syslogd.

      But well, it’s probably overkill, and you almost certainly just need to log to a shared volume.

      • dan@upvote.au

        Syslog isn’t really overkill IMO. It’s pretty easy to configure it to log to a remote server, and to split particular log types or sources into different files. It’s a decent abstraction - your app that logs to syslog doesn’t have to know where the logs are going.
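
        As a rough sketch of both of those with rsyslog (the hostname, program name, and paths are made up):

          cat > /etc/rsyslog.d/30-central.conf <<'EOF'
          # forward everything to a central collector over TCP
          *.* @@logs.example.internal:514
          # keep one app's messages in their own local file
          if $programname == 'myapp' then /var/log/myapp.log
          EOF
          systemctl restart rsyslog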

    • FrederikNJS@lemm.ee

      Since you are talking about pods, you are obviously emitting all your logs on stdout and stderr, and you have of course also labeled your pods nicely, so grepping all 36 pods is as easy as kubectl logs -l <label-key>=<label-value> | grep <search-term>

    • SeattleRain@lemmy.world

      This is what I was thinking. And you can’t really graph things out over time, which is really critical for a lot of workflows.

      I get that Splunk and Elastic are unwieldy beasts that take way too much maintenance for what they provide for many orgs, but to think grep is a replacement is kinda crazy.

    • douglasg14b@lemmy.world

      Yeah, ofc it is.

      I’m working on a system that generates 750 MILLION non-debug log messages a day (and this isn’t even as many as some others).

      Good luck grepping that, or making heads or tails of what you need.

      We put a lot of work into making the process of digging through logs easier. The absolute minimum we can do is dump it into Elastic so it’s available in Kibana.

      Similarly, in a k8s env you need to get logs off of your pods ASAP, because pods are transient and disposable. There is no guarantee that a particular pod will live long enough to have introspectable logs on that particular instance (of course there is some log aggregation available in your environment that you could grep, but the actual usefulness of it is questionable, especially if you don’t know what you need to grep for).

      There are dozens, hundreds more problems that crop up as you scale the number of systems and the number of people working on them.

  • themoonisacheese@sh.itjust.works

    I used to work for a very, very large company, and there the entire job of me and a team of 9 other people was ensuring that the shitty QRadar stack kept running (it did not want to do so). I would like to make abundantly clear that our job was not to use this stack at all, simply to keep it running. Using it was another team’s job.

    • Swedneck@discuss.tchncs.de

      remember this shit when people talk about how we can’t just give people money for doing nothing

      we’re already just inventing problems for people to fix so we can justify paying them

  • xmunk@sh.itjust.works

    Why grep log files when I can instead force corporate to pay a fuck ton of money for a Splunk license?

  • UnfortunateShort@lemmy.world

    The middle thing is not what normies do, it is what enterprises do, because they have other needs than just knowing ‘error where?’

    • nik9000@programming.dev

      Do folks still use Logstash here? Filebeat and ES get you pretty far. I’ve never been deep in ops land though.
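
      The no-Logstash setup I mean can be as small as this (the paths and ES host are made up):

        # filebeat.yml: tail app logs, ship them straight to Elasticsearch
        cat > filebeat.yml <<'EOF'
        filebeat.inputs:
          - type: filestream
            id: app-logs
            paths:
              - /var/log/myapp/*.log

        output.elasticsearch:
          hosts: ["http://elasticsearch.example.internal:9200"]
        EOF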

  • RoadieRich@midwest.social

    As someone who used to troubleshoot an extremely complex system for my day job, I can say I’ve worked my way across the entire bell curve.

  • 9point6@lemmy.world

    Good tracing & monitoring means you should basically never need to look at logs.

    Pipe them all into a dumb S3 bucket with less than a week’s retention and grep away for that one time out of 1000 when you didn’t put enough info on the trace or fire enough metrics. Remove redundant logs that are covered by traces and metrics to keep costs down (or at least drop them to debug log level and only store info & up if they’re helpful during local dev).
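
    The retention half of that is a one-off lifecycle rule, e.g. on AWS (the bucket name is made up):

      # Expire everything in the log bucket after 7 days
      cat > lifecycle.json <<'EOF'
      { "Rules": [ { "ID": "expire-raw-logs", "Status": "Enabled", "Filter": { "Prefix": "" }, "Expiration": { "Days": 7 } } ] }
      EOF
      aws s3api put-bucket-lifecycle-configuration --bucket my-raw-log-bucket --lifecycle-configuration file://lifecycle.json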

      • 9point6@lemmy.world

        Well, I didn’t say anything about perfectly clean, but I agree. It’s very nice to work on my current projects, where we’ve set up our observability to modern standards, compared to any of the log-vomiting services I’ve worked on in the past.

        Obviously it’s easier to start with everything set up nicely in a greenfield project, but don’t let perfect be the enemy of good: iterative improvements on badly designed observability nearly always pay off.

  • Tryptaminev@lemm.ee

    Please excuse my ignorance, but what is grep, what are the do’s and don’ts of logging, and why are people here talking about having an entire team maintain some pipeline just to handle logs?

  • DrM@feddit.de

    Just using Fluentd to push the logs into an Elasticsearch DB and using Kibana as a frontend is one day of work for a Kubernetes admin, and it works well enough (and way better than grepping log files from every one of the 3000 pods running in a big cluster).
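
    A minimal sketch of that pipeline, assuming the fluent-plugin-elasticsearch plugin is installed (the host and paths are guesses, not gospel):

      # fluent.conf: tail container logs, push them to Elasticsearch for Kibana
      cat > fluent.conf <<'EOF'
      <source>
        @type tail
        path /var/log/containers/*.log
        pos_file /var/log/fluentd-containers.pos
        tag kube.*
        <parse>
          @type json
        </parse>
      </source>

      <match kube.**>
        @type elasticsearch
        host elasticsearch.logging.svc
        port 9200
        logstash_format true
      </match>
      EOF

    In a real cluster you’d run this as a DaemonSet and add the Kubernetes metadata filter, but the shape is the same.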

  • macniel@feddit.de

    Hmm but Kibana makes it easier to read and parse logs. And you don’t need server permissions to do it.

    • DudeDudenson@lemmings.world

      I’m not sure if you’re serious or not.

      At my job they unilaterally decided that we no longer had access to our application logs in any way other than a single company-wide Grafana with no access control (which means anyone can see anything, and seeing the stats and logs of only your own stuff is a PITA).

      Half the time the relevant log lines straight up don’t show up unless you use an explicit search for their content (good luck finding relevant information for an unknown error), and you’re extremely limited in how many log lines you can see at once.

      Not to mention that none of our applications were designed with this platform in mind, so all the logging is done in a legacy way that assumes you’ll just grep a log file, and there’s no way the sponsors will commit to letting us spend weeks adjusting our legacy applications to actually log in a way that is useful for viewing in Grafana and not a complete shitshow.

      I worked with a Logstash/Elastic/Kibana stack for years before this job, and I can tell you these solutions aren’t meant for reading lines one by one or for context searches (where seeing what happened right before and after matters a lot); they’re meant for aggregations and analysis.

      It’s like moving all your stuff from one house to another in a tiny electric car. Sure, technically it can be done, but that’s not its purpose at all, and good luck moving your fridge.

      • thesmokingman@programming.dev

        Are you sure it was set up correctly before? Kibana is the tool I’ve provisioned for dev log access for years so I don’t have to give them k8s perms. I have trained teams on debugging via Kibana and used Kibana myself for figuring out where prod errors were happening.

        Your first paragraph is super shitty devX. That’s not okay. Your penultimate paragraph is really what I’m asking about.

      • douglasg14b@lemmy.world

        Ok…

        So your point is that a bad logging implementation is bad. And I agree.

        I’m not seeing how that’s extendable to implementations as a whole. You’re conflating your bad experience with “log aggregation is bad”.

        Just because your company sucks at this doesn’t mean everyone else’s does.

      • Evotech@lemmy.world

        You can easily access raw live output from any source in Kibana if you want to, for observability.