Having a fast and responsive app is orthogonal to “knowing your big Os”. Unfortunately, most tech companies over-emphasize algorithms in interviews and downplay systems knowledge, and I believe that’s one reason behind sluggish apps and bloated systems.

I’ve seen this play out repeatedly. Interviewers ask a LeetCode-style coding question, which is then followed by the ritual of discussing time and memory complexity. Candidates ace the answers. But then… their “real” code suffers from subtle yet impactful performance problems.

  • pnutzh4x0r@lemmy.ndlug.org (OP) · 1 year ago

    I think this is the author being humble. jmmv is a long-time NetBSD and FreeBSD contributor (tmpfs, ATF, pkg_comp), has worked as an SRE at Google, and has been a developer on projects such as Bazel (build infrastructure). They probably know a thing or two about performance.

    Regarding the overall point of the blog, I agree with jmmv. Big O is a measure of efficiency at scale, not a measure of performance.

    As someone who teaches Data Structures and Systems Programming courses, I demonstrate this to students early on by showing them multiple solutions to a problem such as how to detect duplicates in a stream of input. After analyzing the time and space complexities of the different solutions, we run the programs and measure the time. It turns out that the O(n log n) version using sorting can beat out the O(n) version due to cache locality and how memory actually works.
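    A minimal sketch of that kind of comparison in Java (not the actual course code): count duplicates in an int array once by sorting a copy and once with a HashSet, with rough wall-clock timing. Results vary by machine, data, and JIT warmup, so measure rather than assume; a proper benchmark would use JMH.

    ```java
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Random;
    import java.util.Set;

    public class DuplicateCount {

        // O(n log n): sort a copy of the flat int[] and count equal neighbors.
        // Both the sort and the scan walk contiguous memory, so cache locality is good.
        static int countDuplicatesSorted(int[] input) {
            int[] copy = Arrays.copyOf(input, input.length);
            Arrays.sort(copy);
            int dups = 0;
            for (int i = 1; i < copy.length; i++) {
                if (copy[i] == copy[i - 1]) dups++;
            }
            return dups;
        }

        // O(n): insert into a HashSet; add() returns false on a repeat.
        // Every int is boxed and hashed, and the table nodes are scattered in memory.
        static int countDuplicatesHashed(int[] input) {
            Set<Integer> seen = new HashSet<>(input.length * 2);
            int dups = 0;
            for (int value : input) {
                if (!seen.add(value)) dups++;
            }
            return dups;
        }

        public static void main(String[] args) {
            int n = 10_000_000;
            int[] data = new Random(42).ints(n).toArray();

            long t0 = System.nanoTime();
            int sortedCount = countDuplicatesSorted(data);
            long t1 = System.nanoTime();
            int hashedCount = countDuplicatesHashed(data);
            long t2 = System.nanoTime();

            System.out.printf("sorted: %d dups in %d ms%n", sortedCount, (t1 - t0) / 1_000_000);
            System.out.printf("hashed: %d dups in %d ms%n", hashedCount, (t2 - t1) / 1_000_000);
        }
    }
    ```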

    Big O is a useful tool, but it doesn’t directly translate to performance. Understanding how systems work is a lot more useful and important if you really care about optimization and performance.

    • chunkystyles · 1 year ago

      > Big O is a useful tool, but it doesn’t directly translate to performance. Understanding how systems work is a lot more useful and important if you really care about optimization and performance.

      This is something I learned pretty early on as a professional developer. I got a computer science degree and was taught data structures, algorithms, and big O. In my first job, I came across a small piece of Java code that ran frequently and searched a small list each time. I figured converting it to a HashSet would be faster, but in testing it was actually slower. I forget the exact details, but I learned to test my assumptions about performance and not to just blindly change things.
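      A hypothetical sketch of that kind of comparison (the values and sizes are made up, not the original code): repeatedly look up one element in a five-element ArrayList versus a HashSet. Which side wins depends on the data, the sizes, and the JIT, which is exactly why it is worth measuring; a naive loop like this is easy to fool, and JMH is the usual tool for doing it properly.

      ```java
      import java.util.ArrayList;
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;

      public class SmallLookup {
          public static void main(String[] args) {
              // A handful of elements, as in the anecdote above (values are made up).
              List<String> list = new ArrayList<>(
                      List.of("alpha", "beta", "gamma", "delta", "epsilon"));
              Set<String> set = new HashSet<>(list);
              String needle = "epsilon";
              int iterations = 10_000_000;
              boolean sink = false;  // keep the JIT from discarding the lookups

              long t0 = System.nanoTime();
              for (int i = 0; i < iterations; i++) {
                  sink ^= list.contains(needle);  // O(n) scan, but n is tiny and contiguous
              }
              long t1 = System.nanoTime();
              for (int i = 0; i < iterations; i++) {
                  sink ^= set.contains(needle);   // O(1) lookup, with hashing overhead per call
              }
              long t2 = System.nanoTime();

              System.out.printf("list: %d ms, set: %d ms (%b)%n",
                      (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sink);
          }
      }
      ```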

      My degree, for the most part, did not prepare me for working with large, interconnected systems. The only course that came close was an elective called “concurrent programming”, which was really just “algorithms, but parallel”.