I was thinking of setting up a home surveillance system using Frigate and integrating it with Home Assistant. I’d probably have somewhere on the order of 10-15 1080p 30fps cameras. I’m not sure what components I should get for the server, as I’m unsure of the actual processing requirements.

EDIT 1: For some extra information, I did find that Frigate has a recommended hardware page.

  • trankillity@lemmy.world · 1 year ago

    That’s quite a few cameras. I would do an audit on how many you will actually need first, because you will likely find you could get by with 5-10.

    In terms of what you’ll need - any Intel chip that supports QuickSync will likely do for the main ffmpeg processing of the video, but you will definitely want a Google Coral TPU. If you do end up needing 10-15 cameras, you may end up needing the M.2 dual-TPU version of the Coral. You will also want some form of reliable storage for your clips (local, NAS, or NFS), as well as the ability to back up those clips/shots to the cloud somewhere.
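
    Roughly, the relevant bits of the Frigate config would look something like this (just a sketch assuming a recent Frigate release; the detector device string and ffmpeg preset depend on your exact hardware):

    ```yaml
    # Sketch: one Coral detector plus Intel QuickSync/VAAPI hardware decode.
    detectors:
      coral:
        type: edgetpu
        device: usb              # pci:0 / pci:1 for the M.2 or PCIe variants

    ffmpeg:
      hwaccel_args: preset-vaapi # hardware decode on Intel iGPUs with QuickSync
    ```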

    I’m personally running 4 cameras (3x1080 @ 15fps, 1x4k @ 25fps) through my ~7 year old Synology DS418play NAS using Surveillance Station as the first ingestion point, then restreaming from there to Frigate. Now that Surveillance Station can accept external events via webhook, I may look to swap the direction, and ingest into Frigate first, then restream out to Surveillance Station for long-term storage.

    “Why not directly use Frigate?” I hear you ask. Mostly because Frigate is pretty static. It’s all set up via YAML with no config UI currently, whereas I can tweak stuff on Surveillance Station quite easily.

    • Kalcifer@lemmy.world (OP) · 1 year ago

      That’s quite a few cameras. I would do an audit on how many you will actually need first, because you will likely find you could get by with 5-10.

      That’s a fair point. I haven’t actually gone through methodically to see exactly how many I would need just yet. The numbers I chose were just a ballpark off the top of my head.

      You will also want some form of reliable storage for your clips

      I am planning to give the camera server dedicated storage for the data. If I’m really feeling like splurging on it, I may look into getting WD Purple drives, or the like.

      as well as the ability to back up those clips/shots to the cloud somewhere.

      I’m not sure that I would need this very much. I’m mostly interested in a sort of ephemeral surveillance system; I only really need to store, at most, a few days, and then overwrite it all.

      I’m personally running 4 cameras (3x1080 @ 15fps, 1x4k @ 25fps) through my ~7 year old Synology DS418play NAS

      Would you say that 15 FPS is a good framerate for surveillance? Or could one get away with even less to lessen the resource requirements?

      whereas I can tweak stuff on Surveillance Station quite easily.

      What tweaking do you generally need to do for the camera server?

      • m1st3r2@butts.international · 1 year ago

        Would you say that 15 FPS is a good framerate for surveillance? Or could one get away with even less to lessen the resource requirements?

        If doing CPU-based motion analysis, you could use a lower quality stream (if available from the cameras to avoid transcoding load) for motion detection, then use that to trigger recording on a higher quality stream.
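
        In Frigate that maps onto per-stream roles, roughly like this (camera name and RTSP URLs are placeholders):

        ```yaml
        # Sketch: low-res substream feeds detection, full-res stream only gets recorded.
        cameras:
          front_door:
            ffmpeg:
              inputs:
                - path: rtsp://camera-ip:554/substream    # e.g. 1280x720 @ 5fps
                  roles:
                    - detect
                - path: rtsp://camera-ip:554/mainstream   # full resolution
                  roles:
                    - record
            detect:
              width: 1280
              height: 720
              fps: 5
        ```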

        • Kalcifer@lemmy.world (OP) · 1 year ago

          you could use a lower quality stream (…) for motion detection, then use that to trigger recording on a higher quality stream.

          Brilliant idea! Thank you for the suggestion!

          If doing CPU-based motion analysis

          Why do you specifically mention CPU-based motion analysis? Does this idea not work with the Google Coral TPU, for example?

          • m1st3r2@butts.international · 1 year ago

            I’m using ZoneMinder on my end with more rudimentary motion detection, hence CPU detection. (My current hardware is pre-IOMMU on the mobo, so no PCI passthrough for me…)

            That said, if you have hardware that can handle a given workload (via CPU, GPU, TPU, etc.), then you’ve got to decide how you want to spend that budget. Whether resources go more toward analysis FPS or toward evaluating higher-detail frames is up to what you need.

      • trankillity@lemmy.world · 1 year ago

        I’m not sure that I would need this very much. I’m mostly interested in a sort of ephemeral surveillance system; I only really need to store, at most, a few days, and then overwrite it all.

        This is exactly what I do. I simply back up any event/object clips to the cloud, but only retain the last 5 days locally. The cloud copy is there in case law enforcement needs it, or in the event of hardware failure/catastrophic house damage.
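
        On the Frigate side that’s roughly a retention config like this (the exact keys differ a bit between versions, so treat it as a sketch):

        ```yaml
        # Sketch: roll continuous recordings and event clips off after ~5 days.
        record:
          enabled: true
          retain:
            days: 5
            mode: all     # keep all footage for 5 days
          events:
            retain:
              default: 5  # event clips are also kept 5 days
        ```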

        What tweaking do you generally need to do for the camera server?

        Recording schedules change based on time of day/when we’re in/out of the house. This is all handled as automations through Home Assistant, but is set up through Surveillance Station NVR.
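
        As a rough example of the Home Assistant side (entity names are placeholders, and it assumes the cameras expose the standard camera motion-detection services in HA):

        ```yaml
        # Sketch: pause motion-triggered recording while someone is home.
        automation:
          - alias: "Disable driveway motion detection when someone arrives"
            trigger:
              - platform: state
                entity_id: group.household        # placeholder presence group
                to: "home"
            action:
              - service: camera.disable_motion_detection
                target:
                  entity_id: camera.driveway      # placeholder camera entity
          - alias: "Enable driveway motion detection when everyone leaves"
            trigger:
              - platform: state
                entity_id: group.household
                to: "not_home"
            action:
              - service: camera.enable_motion_detection
                target:
                  entity_id: camera.driveway
        ```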

      • trankillity@lemmy.world · 1 year ago

        Just reporting back that I did the work last night to change the ingestion order for my cameras. I’m now using the go2rtc component of Frigate as the first ingestion point. That component serves a restream to both Frigate and my NAS’s NVR. It’s working much better now, with less frame delay and less CPU usage on the NAS.
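
        For anyone curious, the config is roughly this shape (stream name and camera URL are placeholders):

        ```yaml
        # Sketch: go2rtc (bundled with Frigate) pulls each camera once,
        # then Frigate and the NAS NVR both consume the restream.
        go2rtc:
          streams:
            front_door:
              - rtsp://user:pass@camera-ip:554/mainstream

        cameras:
          front_door:
            ffmpeg:
              inputs:
                - path: rtsp://127.0.0.1:8554/front_door   # Frigate reads the restream
                  roles:
                    - detect
                    - record
        # Surveillance Station is then pointed at rtsp://<frigate-host>:8554/front_door
        ```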

  • lps2@lemmy.ml · 1 year ago

    For Frigate especially, you are going to want to use multiple Coral TPUs to handle inferencing. I really can’t speak to the CPU requirements, but I know a lot of people like using the Intel NUCs with mid-tier processors.
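
    If you do go that route, registering more than one Coral in Frigate looks roughly like this (the device strings are assumptions and depend on whether they are USB, M.2, or PCIe):

    ```yaml
    # Sketch: the dual-TPU M.2 card typically shows up as two PCI devices.
    detectors:
      coral1:
        type: edgetpu
        device: pci:0
      coral2:
        type: edgetpu
        device: pci:1
    ```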

  • ninjan@lemmy.mildgrim.com · 1 year ago

    The space requirements get super intense with many cameras like that unless you compress the video. And if you compress it, then with so many cameras you need them to do it on the fly; otherwise your server needs to be really beefy to handle real-time encoding of 10+ incoming video feeds. Also if the cameras don’t encode then the data flow would congest your network something fierce. Those requirements might push the cameras from cheap to not so cheap, though (still far from expensive, imo).
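
    As a back-of-envelope, assuming raw YUV 4:2:0 for unencoded video and roughly 4 Mbps per camera for H.264 (both numbers vary a lot in practice):

    ```latex
    % Unencoded 1080p30, per camera (4:2:0 is ~1.5 bytes/pixel):
    1920 \times 1080 \times 1.5 \times 30 \approx 93\ \text{MB/s} \approx 750\ \text{Mbps}
    % 15 cameras encoded at ~4 Mbps each:
    15 \times 4\ \text{Mbps} = 60\ \text{Mbps} \approx 0.65\ \text{TB of footage per day}
    ```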

    The biggest issue as I see it with so many cameras would be how to find interesting stuff in all that data. If it’s only surveillance, then sure, you can just retain like a week of feeds and make a vacation mode where you store enough to cover the whole vacation. But if you want to look at what your dogs do, etc., then trying to track them across 10+ cameras is going to be tricky without some software help; I’m unsure if there is anything open source for that.

    • Kalcifer@lemmy.world (OP) · 1 year ago

      The space requirements get super intense with many cameras like that unless you compress the video.

      I think Frigate uses H.264, if I remember correctly. Also, I’m not planning on storing and archiving the recorded data; I most likely would only save a day or a couple of days. You do raise a good point about vacations, though - I should probably have enough storage for possible vacations.

      Also if the cameras don’t encode then the data flow would congest your network something fierce.

      The network that the camera feeds would be flowing through would essentially be isolated from the rest of the network. I intend to hook the cameras up to a dedicated network switch, which would then be connected to the camera server.

      The biggest issue as I see it with so many cameras would be how to find interesting stuff in all that data.

      What’s nice about Frigate is that it uses OpenCV and TensorFlow to analyze the video streams for moving objects.
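
      For example, you can tell it which object types to track, roughly like this (labels and thresholds are just illustrative):

      ```yaml
      # Sketch: only track the object types of interest to cut down on noise.
      objects:
        track:
          - person
          - car
          - dog
        filters:
          person:
            min_score: 0.6    # minimum single-frame confidence
            threshold: 0.75   # minimum confidence to report the object
      ```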

      More information can be found on Frigate’s website.