• 2 Posts
  • 23 Comments
Joined 1 year ago
Cake day: June 21st, 2023

  • The problem, I believe, is that Stable Diffusion currently supports only Python 3.10, while Arch ships 3.12, and some of the dependencies aren’t compatible with the newer version. Here’s what I did to get it working on Arch with an AMD 7800 XT GPU.

    1. Install python310 package from AUR
    2. Manually create the virtualenv for stable diffusion with python3.10 -m venv venv (in stable diffusion root directory)

    This should be enough for the dependencies to install correctly. To get GPU acceleration to work, I also had to add this environment variable: HSA_OVERRIDE_GFX_VERSION=11.0.0 (not sure if this is needed, or whether the value is the same for the 7900 XTX).
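
    Condensed into shell commands, the whole procedure looks roughly like this. It’s only a sketch: it assumes the AUTOMATIC1111 web UI checkout, the yay AUR helper and the default webui.sh launcher, so adjust the path and launcher to whatever you actually use.

    ```bash
    # Install Python 3.10 alongside the system Python (AUR package)
    yay -S python310

    # Create the venv with 3.10 explicitly, inside the web UI checkout
    cd ~/stable-diffusion-webui   # assumed path
    python3.10 -m venv venv

    # Tell ROCm to treat the 7800 XT as a supported gfx11 target, then launch
    HSA_OVERRIDE_GFX_VERSION=11.0.0 ./webui.sh
    ```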




  • pavunkissato to Linux@lemmy.ml · Flatpack, appimage, snaps.. · 11 months ago

    This was my experience as well, as a developer trying to package an application as an AppImage. Creating an AppImage that works on your own machine is easy. Creating one that actually works on other distros can be damn near impossible unless everything is statically linked and self-contained in the first place. In contrast, Flatpak’s developer experience is much easier, and if it runs, you can be pretty sure it runs elsewhere as well.
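
    For comparison, a minimal Flatpak build is basically a manifest plus one command. The app ID and manifest file name below are made up; the pinned runtime/SDK in the manifest is what makes the result portable across distros:

    ```bash
    # Build the app from its manifest and install it for the current user
    flatpak-builder --user --install --force-clean build-dir org.example.MyApp.yaml

    # Run the freshly built Flatpak
    flatpak run org.example.MyApp
    ```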


  • This feels like something straight out of an old sci-fi satire. We live in a world where work takes up an ever-growing share of people’s time. People are isolated from one another and told that they are rational individuals who only make independent decisions. Human interaction is replaced with transactional services and formal interfaces. And then we wonder why people are lonely, and the solution on offer is a chatbot you can talk to about your problems!


  • If I recall, Enlightenment used to have a rather vocal fan base at one time. The DE was a lot prettier than most of its contemporaries, and it was relatively lightweight despite having animated effects and everything. I always thought EFL was one of the hidden gems of the Linux ecosystem, left in GTK’s and Qt’s shadow, but after reading the article (back when it was first published) I realized there was probably a good reason it never got popular. I thought the story was embellished, as thedailywtf articles typically are, with the “SPANK! SPANK! SPANK! Naughty programmer!” stuff, so I downloaded the EFL source code and checked. OMG, it was a real error message. (Though I believe it has since been removed.)

    The company in question using EFL was (probably) Samsung, who apparently still uses it as the native graphical toolkit for Tizen.



  • That is a good point to emphasize. A downside of a CLA is that it adds a bit of bureaucracy and may deter some contributors. If the primary concern is whether a GPL-licensed app can be published on an app store, an alternative is to add an app store exception clause to the license. (The GPL allows optional extra clauses that make the license more permissive.) The trade-off is that while your code can be incorporated into other GPL-licensed applications, you can’t take code from other GPL projects that don’t carry the same exception.


  • As others have already said, prohibiting use of the code in commercial applications would make the license neither open source nor free software (as defined by the Free Software Foundation and the Open Source Initiative).

    These are some of the most commonly used licenses:

    • MIT - a very permissive license. Roughly says “do anything with this as long as you give attribution”
    • BSD - similar to MIT (note that there are multiple versions of the BSD license)
    • ASL2 - another permissive license. The major difference is that it also includes a patent grant clause. (Mini rant: I often hear that GPL3’s patent clause is the reason big companies don’t like it. Yet ASL2 has the very same clause, and it’s Google’s favored license.)
    • GPL - the most popular copyleft license (family). Requires derived works to be licensed under the same terms.
    • LGPL - a variant of the GPL that permits dynamic linking to differently licensed works. Mainly useful for libraries.
    • AGPL - a variant of GPL that specifies that making the software available over a network counts as distribution. (Works around the SaaS loophole. Mainly used for server applications.)
    • Mozilla - a hybrid permissive/copyleft license. I don’t fully understand how this one works.

    If you want to use a true FLOSS license and your goal is to discourage people from selling it, I’d say the GPL is your best bet. Legit vendors who don’t want to give out their source code won’t touch GPL code, and the non-legit ones won’t care no matter what license you choose. Also, the iOS App Store terms are not compatible with the GPL, so they can’t release their stuff there, but you can as long as you hold full copyright to your application.



  • My impression of Matter was also that it is not “done” yet and that device support is poor. On the other hand, you read everywhere that it will be the future.

    This is my impression as well. I’m keeping an eye on how this space develops, and I’ll probably buy a second dongle just for Thread when I need it (i.e. when some product I really want comes out that only supports Thread). I believe most Zigbee dongles are theoretically capable of supporting Thread, since both share the same physical-layer protocol (IEEE 802.15.4).

    I’m curious to hear people’s experiences with Thread/Matter devices. Ideally, I’d like to use my HA box as the border router and configure it to not allow any external Internet connections. Will this break any functionality on devices with a Matter logo on them? Ideally it shouldn’t, but given the track record of manufacturers so far, my expectations are low.
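
    To make that concrete, the kind of firewalling I have in mind would look roughly like this, assuming the Thread/Matter gear ends up on its own interface or VLAN and the border router host uses nftables. The interface names and the filter table are assumptions on my part, not anything the Matter spec mandates:

    ```bash
    # Assumes an existing "inet filter" table with a forward chain;
    # iot0 = Thread/Matter segment, lan0 = trusted LAN, eth0 = WAN (all made-up names)
    nft add rule inet filter forward iifname "iot0" oifname "lan0" accept
    nft add rule inet filter forward iifname "iot0" oifname "eth0" drop
    ```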


  • I use zigbee2mqtt myself and I’ve been very happy with it. I haven’t tried ZHA, but I believe z2m supports more devices. (I use z2m’s supported-devices list to choose which ones to buy.) The downside is that it’s a bit more work to set up initially, as you need an MQTT broker as well (see the sketch below), but in return I feel z2m is more reliable, since it runs (and is updated) separately from HA core. I use it with a zzh! dongle, and even though I got one of the bad ones with a faulty amplifier chip, it’s been rock solid.
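
    To give a sense of that extra setup, the broker side is roughly this. It’s a minimal sketch assuming a Debian-based host and Mosquitto with its default config; a containerized broker works just as well:

    ```bash
    # Install and start the Mosquitto MQTT broker that zigbee2mqtt will publish to
    sudo apt install -y mosquitto mosquitto-clients
    sudo systemctl enable --now mosquitto

    # Quick sanity check: watch everything zigbee2mqtt publishes
    mosquitto_sub -h localhost -t 'zigbee2mqtt/#' -v
    ```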

    As for Thread (+ Matter), I’m waiting for things to settle down. Support in HA is still experimental and there are very few Thread products out yet. I’ll probably keep preferring Zigbee for as long as Zigbee devices are still sold, so all my devices share the same mesh. Also, unlike Zigbee, Thread devices are not guaranteed to be local-only, which is my biggest worry. Thread/Matter won’t free us from having to check a device compatibility list before buying.


  • This is my chief worry with Thread. Zigbee is guaranteed to be local-only, but if manufacturers switch over to Thread, the individual bulbs will be able to call home, even if they expose some of their functionality locally via Matter. With Home Assistant, one can probably configure the Thread border router to not allow Internet access, but I suspect a lot of supposedly local Thread/Matter devices will be designed with the assumption that they have cloud access and won’t function fully if firewalled.





  • pavunkissato to homeassistant@lemmy.world · Camera suggestions · 1 year ago

    Adding to this, as I’m also interested. I’m currently looking at cameras recommended in the Frigate wiki, since any camera that works well with Frigate also ought to work well in HA. One interesting thing I’ve noticed is that some of the Hikvision and Dahua models have onboard AI features for object recognition. Does anyone have experience with these? Can they report those events back to Home Assistant, and are they worth using?


  • One thing I’m curious about: do you measure the idle power consumption of your NUC, and does it really drop down to 6 W? Because with a hypervisor installed, I would assume that it never really goes “idle”, since the resources are constantly bound.

    I used a power-metering plug to measure the consumption, and it showed around 6 W when no VMs were running. It’s probably higher now with HA online, as my UPS is showing a 5 W increase over when the Pi was plugged in. (The UPS always shows a higher number than the power meter, though, so I’m not sure which one to trust.) If the new figures are correct, the NUC is drawing about 10 W with HA running. I’ll have to see whether setting the CPU frequency governor to powersave has any effect.
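
    For reference, switching the governor is a one-liner, assuming the cpupower utility is installed and the CPU driver exposes a powersave governor:

    ```bash
    # Show the currently active cpufreq policy, then switch all cores to powersave
    cpupower frequency-info --policy
    sudo cpupower frequency-set -g powersave
    ```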


  • I considered bare-metal HassOS too and would have gone that route if HA were the only thing I was planning to run. Another option would have been to install a Linux distro and run HA in Docker, but having HA in its own separate VM means I don’t need to worry about accidentally breaking it when I’m messing around with other services.

    Now, having written this, I realize there would have been some real advantages to running HA in Docker on a bare-metal OS. For one, it would have made running Frigate easier, as its documentation recommends against running it in a VM.
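
    For anyone weighing the same trade-off, the Home Assistant Container route boils down to roughly this (the config path and timezone are placeholders; check the official container docs for the current invocation):

    ```bash
    # Run Home Assistant Container on a plain Linux host with Docker installed
    docker run -d \
      --name homeassistant \
      --restart=unless-stopped \
      --privileged \
      -e TZ=Europe/Helsinki \
      -v /opt/homeassistant/config:/config \
      --network=host \
      ghcr.io/home-assistant/home-assistant:stable
    ```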