I thought I’d make this thread for all of you out there who have questions but are afraid to ask them. This is your chance!
I’ll try my best to answer any questions here, but I hope others in the community will contribute too!
Mods, perhaps a weekly post like this would be beneficial? Lowering the bar to entry with some available support and helping to keep converts.
Agreed. @cypherpunks@lemmy.ml, I think this would be a great idea - making a weekly megathread for Linux questions, preferably also stickied for visibility.
Ok, I just stickied this post here, but I am not going to manage making a new one each week :)
I am an admin at lemmy.ml and was actually only added as a mod to this community so that my deletions would federate (because there was a bug where non-mod admin deletions weren’t federating a while ago). The other mods here are mostly inactive and most of the mod activity is by me and other admins.
Skimming your history here, you seem alright; would you like to be a mod of /c/linux@lemmy.ml ?
Please feel free to make me a mod too. I am not crazy active, but I think my modest contributions will help.
And I can make this kind of post on a biweekly or monthly basis :) I think weekly might be too often since the post frequency here isn’t crazy high
Ok, you and @d3Xt3r@lemmy.nz are both mods of /c/linux@lemmy.ml now. Thanks!
Thanks! Yep, I mentioned you directly seeing as all the other mods here are inactive. I’m on c/linux practically every day, so happy to manage the weekly stickies and help out with the moderation. :)
Yeah I was thinking the same. Perhaps make a sticky post about it once a week.
Is it difficult to keep your leg shaved and how many pairs of long socks do you have?
Subjectively: it is hard to keep my legs shaved
Objectively: there’s never enough programming socks
Don’t try to shave. Use hair removal creams instead. You get longer-lasting results and the skin is actually free from stubble.
I have 6 pairs.
MOAR SOCKS
OP. Gotta say that this thread is a brilliant idea!
Thank you 😄
inbox going brrr…
How do symlinks work from the point of view of software?
Imagine I have a file in my downloads folder called movie.mp4, and I have a symlink to it in my home folder.
Whenever I open the symlink, does the software (player) understand «oh this file seems like a symlink, I should go and open the original file», or it’s a filesystem level stuff and software (player) basically has no idea if a file I’m opening is a symlink or the original movie.mp4?
Can I use sync software (like Dropbox, Gdrive or whatever) to sync symlinks? Can I use sync software to sync actual files, but only have symlinks in my sync folder?
Is there a rule of thumb to predict how software behaves when dealing with symlinks?
I just don’t grok symbolic links.
A symlink works more closely to the first way you described it. The software opening a symlink has to actually follow it. It’s possible for software not to follow the symlink (either intentionally or not).
So your sync software has to actually be able to follow symlinks. I’m not familiar with how gdrive and similar solutions work, but I know this is possible with something like rsync.
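For example, rsync’s default is to copy the symlink itself rather than the file it points to; whether you want that is one flag away (the paths and the “backup” host here are placeholders, not from the thread):
rsync -a ~/Downloads/ backup:/data/Downloads/    # -a preserves symlinks as symlinks
rsync -aL ~/Downloads/ backup:/data/Downloads/   # -L/--copy-links copies the target file instead of the link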
An application can know that a file represents a soft link, but it doesn’t need to do anything differently to follow it. If the program just opens it, reads it, writes to it, etc., as though it were the original file, it will just work™ without it needing to do anything differently.
It is possible for software to intentionally not follow a symlink, yes (if it fails to follow one unintentionally, that’s probably a bug).
As for hard links, I’m not as certain, but I think these need to be supported at the filesystem level (which is why they often have specific restrictions), and the application can’t tell the difference.
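A quick way to see the difference for yourself, reusing the movie.mp4 example from the question:
ln -s ~/Downloads/movie.mp4 ~/movie-link.mp4   # soft/symbolic link: a tiny file that just stores a path
ln ~/Downloads/movie.mp4 ~/movie-hard.mp4      # hard link: a second name for the same data (same filesystem only)
ls -li ~/Downloads/movie.mp4 ~/movie-link.mp4 ~/movie-hard.mp4   # -i shows inodes: the hard link shares one, the symlink doesn’t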
So I guess it’s something like pressing ctrl+c: most software doesn’t specifically handle this hotkey so in general it will interrupt a running process, but software can choose to handle it differently (like in vim ctrl+C does not interrupt it).
Thanks.
Fun fact: pressing X (close button) on a window does not make it that your app is closed, it just sends a signal that you wish to close it, your app can choose what to do with this signal.
A symlink is a file that contains a shortcut (a text string that is automatically interpreted and followed by the operating system) referencing another file or directory in the system. It’s more or less like a Windows shortcut.
If a symlink is deleted, its target remains unaffected. If the target is deleted, the symlink still points to the now non-existent file/directory. Symlinks can point to files or directories regardless of volume/partition (hardlinks can’t).
Different programs treat symlinks differently. Majority of software just treats them transparently and acts like they’re operating on a “real” file or directory. Sometimes this has unexpected results when they try to determine what the previous or current directory is.
There’s also software that needs to be “symlink aware” (like shells) and identify and manipulate them directly.
You can upload a symlink to Dropbox/Gdrive etc. and it’ll appear as a normal file (probably with a very small filesize), but it loses the ability to act like a shortcut. This is sometimes annoying if you use a cloud service for backups, as it can create filename conflicts, and you need to make sure it’s preserved as a symlink when restored. Most backup software is “symlink aware”.
Software opens a symlink the same way as a regular file. The kernel reads the path stored in the symlink and then opens the file at that path (or returns an error if it can’t for some reason). But if a program needs to perform specific actions on symlinks, it is able to check the file type and resolve the symlink path itself.
To determine how some specific software handles symlinks, read its documentation. It may have settings like “follow symlinks” or “don’t follow symlinks”.
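If you want to poke at one yourself, these are the usual commands for checking whether something is a symlink and where it points (movie.mp4 again as the example):
ls -l ~/movie.mp4         # a symlink shows up as: movie.mp4 -> /home/you/Downloads/movie.mp4
readlink ~/movie.mp4      # prints the stored target path
readlink -f ~/movie.mp4   # fully resolves it, following any chain of links
stat ~/movie.mp4          # reports “symbolic link” as the file type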
ELI5: when a computer stores something like a file or a folder, it needs to know where it lives and where its contents are stored. Normally, where a file or folder lives is the same place as where its contents are. But there are times when a file may live in one place and its contents are elsewhere. That’s a symlink.
So for your video example, the original video is located in Downloads, so the video file will say: I am movie.mp4, I live in Downloads, and my contents are in Downloads. While the symlink says: I am movie.mp4, I live in Home, and my contents are in Downloads over there.
A video player doesn’t care whether the file and the content are in the same place, it just needs to know where the content lives.
Now, how software treats a symlink isn’t absolute. For example, say you have 2 PCs synced with cloud storage, and both Downloads and Home are being synced between your 2 PCs. Your cloud storage will look at the symlink, access the content from PC1 and put your movie.mp4 in PC2’s Downloads and Home. But it will also put the contents in both places on PC2, since to it the results are the same. One could make software sync without breaking the symlink, but it depends on the developer and the scope of the software.
Whenever I open the symlink, does the software (player) understand «oh this file seems like a symlink, I should go and open the original file», or it’s a filesystem level stuff and software (player) basically has no idea if a file I’m opening is a symlink or the original movie.mp4?
Others have answered well already; I’ll just say that symlinks work at the filesystem level, but the operating system is specially programmed to work with them. When a program asks the operating system to open a file at a given path, the OS will automatically dereference the link, meaning it will detect a symlink and jump to the place where the symlink is pointing.
A program may choose to inspect whether a file is a symlink or not. By default, when a program opens a file, it simply allows the operating system to dereference the file path for it.
But some apps that work on directories and files together (like “find”, “tar”, “zip”, or “git”) do need to worry about symlinks, and will check if a path is a symlink before deciding whether to dereference it. For example, you can ask the “find” command to list only symlinks without dereferencing them: find -type l
Symlinks are fully transparent to software that is just opening the file, etc.
If the software really cares about this (like file managers do), it can simply ask the Linux kernel for additional information, like what type of file it is.
It’s a pointer.
E: Okay, so someone downvoted “it’s a pointer”. Here goes: both hard links and symbolic links are pointers.
The hard link is a pointer to a spot on the block device, whereas the symbolic link is a pointer to a location in the filesystem’s list of shit.
That location in the filesystem’s list of shit is also a pointer.
So like if you have /var/2girls1cup.mov, and you click it, the os looks in the file system and sees that /var/2girls1cup.mov means 0x123456EF and it looks there to start reading data.
If you make a symlink to /var/2girls1cup.mov in /bin called “ls” then when you type “ls”, the os looks at the file in /bin/ls, sees that it points to /var/2girls1cup.mov, looks in the file system and sees that it’s at 0x123456EF and starts reading data there.
If you made a hard link in /bin called ls it would be a pointer to the location on the block device, 0x123456EF. You’d type “ls” and the os would look in the file system for /bin/ls, see that /bin/ls means 0x123456EF and start reading data from there.
Okay but who fucking cares? This is stupid!
If you made /bin/ls into /var/2girls1cup.mov with a symlink then you could use normal tools to work with it, looking at where it points, its attributes etc, and like delete just the link, or fully follow (dereference) the link and delete all the links in the chain including the last one, which is the filesystem’s pointer to 0x123456EF called /var/2girls1cup.mov in our example.
If you made /bin/ls into a hardlink to 0x123456EF, then when you did stuff to it the os wouldn’t know it’s also called /var/2girls1cup.mov and when /bin/ls didn’t work as expected you’d have to diff the output of mediainfo on both files to see that it’s the same thing and then look where on the hard drive /var/2girls1cup.mov and /bin/ls point to and compare em to see oh, someone replaced my ls with a shock video using a hard link.
When you delete the /bin/ls hardlink, the os deletes the entry in the file system pointing to 0x123456EF and you are able to put normal /bin/ls back again. Deleting the hard link wouldn’t actually remove the data that comprises that file off the drive because “deleting” a “file” is just removing the file systems record that there’s something there to be aware of.
If instead of deleting the /bin/ls hardlink, you opened it up and replaced the video portion of its data with the music video to never gonna give you up, then when someone tried to open /var/2girls1cup.mov they’d instead see that music video.
If, that is, the file wasn’t moved to another place on the block device when you changed it. Never Gonna Give You Up has a much longer running time than 2girls1cup, and without significant compression the os is gonna end up putting /bin/ls in a different place on the block device that can accommodate the longer data stream. If the os does that when you get done modifying your 2girls1cup /bin/ls into a rickroll, then /bin/ls will point to 0x654321EF or something and only you will experience Astley’s dulcet tones when you use ls; the old 0x123456EF location will still contain the data that /var/2girls1cup.mov is meant to point to, and you will have played yourself.
Okay with all that said: how does the os know what to do when one of its standard utilities encounters a symlink? They have a standard behavior! It’s usually to “follow” (dereference) the link. What the fuck good would a symbolic link be if it didn’t get treated normally? Sometimes though, like with “ls” or “rm” you might want to see more information or just delete the link. In those cases you gotta look at how the software you’re trying to use treats links.
Or you can just make some directories and files with touch and try what you wanna do and see what happens, that’s what I do.
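If you want a throwaway sandbox to do exactly that, something like this works (all names made up):
mkdir -p /tmp/linktest && cd /tmp/linktest
touch realfile
ln -s realfile softlink
ln realfile hardlink
ls -li                # compare inode numbers and the “->” arrow
rm realfile
cat softlink          # fails: the symlink’s target is gone
cat hardlink          # still works: the data has another name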
Is there a way to remove having to enter my password for everything?
Wake computer from Screensaver? Password.
Install something? Password.
Updates (biggest one. Updates should in my opinion just work without it, because being up to date is important for security reasons)? Password.
I understand sudo needs a password, but all the other stuff I just want off. The frequency is ridiculous. I don’t ever leave my house with my computer, and I don’t want to enter a password for my wife every time she wants to use it.
I understand sudo needs a password
You can configure sudo to not need a password for certain commands. Unfortunately the syntax and documentation for that is not easily readable. Doas, which can be installed and used alongside sudo, is easier.
For software updates you can go for unattended-upgrades, though if you turn off your computer while it is upgrading software you may have to fix the broken pieces.
I’ve tried unattended-upgrades once. And I couldn’t get it to work back then. It might be more user friendly now. Or it could just be me.
It’s not really user friendly, at least not how I know it. But useful for servers and when desktop computers are on for a long time. It would be a matter of enabling or disabling it with :
sudo dpkg-reconfigure unattended-upgrades
granted that you have the unattended-upgrades package installed. In that case I’m not sure when the background updates will start, though according to the Debian wiki the time for this can be configured. But with Ubuntu a desktop user should be able to configure software updates to be done automatically via a GUI. https://help.ubuntu.com/community/AutomaticSecurityUpdates#Using_GNOME_Update_Manager
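As far as I know, the dpkg-reconfigure step above basically just writes a small apt config file; you can check what your system ended up with (the two lines shown in the comment are what I’d expect, but trust your own file over my memory):
cat /etc/apt/apt.conf.d/20auto-upgrades
# APT::Periodic::Update-Package-Lists "1";
# APT::Periodic::Unattended-Upgrade "1";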
I understand sudo needs a password,but all the other stuff I just want off.
Sudo doesn’t need a password, in fact I have it configured not to on the computers that don’t leave the house. To do this open the /etc/sudoers file (or some file inside /etc/sudoers.d/) and add a line like:
nibodhika ALL=(ALL:ALL) NOPASSWD:ALL
You probably already have a similar one, either for your user or for a certain group (usually wheel), you just need to add the NOPASSWD part.
As for the other parts, you can configure the computer to not lock the screen (just turn it off), and for updates it depends on distro/DE, but having passwordless sudo allows you to update via the terminal without a password (although it should be possible to configure the GUI to work passwordless too).
The things you listed can be customized.
Disable screen lock and it stops locking. This is a setting in gnome, probably in KDE, maybe in others.
Polkit can allow installing and updating in packagekit (like gnome software) without the password. I think this is default in Fedora, at least for the user marked as administrative. openSUSE actually has a gui for changing some of these privileges in the Security and Hardening settings.
Passwords are meant to protect against using privileged processes as the user. This comes from a very traditional multi-user system, where users should not touch the system.
If the actions that require authentication are supported by polkit (KDE shows the action ID when expanding the message) you can add a rules file in /etc/polkit-1/rules.d/
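If it helps, this is roughly what such a rule looks like. The filename, the action ID (org.freedesktop.packagekit.system-update) and the “wheel” group are examples only - use whatever ID your own prompt reports and whatever admin group your distro uses:
sudo tee /etc/polkit-1/rules.d/49-nopasswd-updates.rules <<'EOF'
// allow members of the wheel group to run this polkit action without a password
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.packagekit.system-update" &&
        subject.isInGroup("wheel")) {
        return polkit.Result.YES;
    }
});
EOF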
You can configure this behavior for CLI, and by proxy could run GUI programs that require elevation through the CLI:
https://wiki.archlinux.org/title/Sudo#Using_visudo
Defaults passwd_timeout=0 (avoids long-running processes/updates timing out while waiting for the sudo password)
Defaults timestamp_type=global (this makes the typed password and its expiry valid for ALL terminals, so you don’t need to type sudo’s password for everything you open afterwards)
Defaults timestamp_timeout=10 (change to any amount of minutes you wish)
The last one may be the difference between having to type the password every 5 minutes versus 1-2 times a day. Make sure you take security implications into account.
I think something like
%wheel ALL= NOPASSWD: /bin/apt
should be the right way of disabling the password for apt.
For wake from screensaver/sleep, this should be configurable. Your window manager is locking your session, so you probably just need to turn that option off.
For installations and updates, I suspect you’re used to Windows-style UAC where it just asks you Yes or No for admin access in a modal overlay. As I understand it, this is easier said than done on linux due to an insistence on never running GUI applications as admin, which makes sense given how responsibilities are divided and the security and technical challenges involved. I will say, I agree 100% that this is a serious area that’s lacking for linux, but I also (think I) understand why no one has implemented something similar to UAC. I’ll try to give the shortest version I can:
All programs (on both Windows and Linux) are run as a user. It’s always possible for any program to have a bug in it that gives another program to opportunity to exploit the bug to hijack that program, and start executing arbitrary, malicious code as that user. For this reason, the philosophical stance on all OSes is, if it’s gonna happen, let’s not give them admin access to the whole machine if we can avoid it, so let’s try to run as much as possible as an unprivileged user.
On linux, the kernel-level processes and admin (root-level) account are fundamentally detached from running anything graphical. This means that it’s very hard to securely, and generically, pop up a window with just a Yes or No box to grant admin-level permissions. You can’t trust the window manager (it’s also unprivileged), but even if you could, it might be designed in a supremely insecure way, and allow just any app with a window to see and interact with any other app’s windows (Xorg), so it’s not safe to just pop up a simple Yes/No box, because then any other unprivileged application could just request root permissions and then click Yes itself before you even see it. Polkit is possible because even if another app can press OK, you still need to enter the password (it’s not clear to me how you avoid other unprivileged apps from seeing the keystrokes typed into the polkit prompt).
On windows, since the admin/kernel level stuff is so tightly tied to the specific GUI that a user will be using, it can overlay its own GUI on top of all the other windows, and securely pop in to just say, “hey, this app wants to run as admin, is that cool?” and no other app running in user mode even knows it’s happening, not even their own window manager which is also running unprivileged. The default setting of UAC is to just prompt Yes/No, but if you crank it to max security you get something like linux (prompt for the password every time), and if you crank it to lowest security you get something closer to what others are commenting (disable the prompt, run things as root, and cross your fingers that nothing sneaks in).
I do think that this is a big deal when it comes to the adoption of linux over windows, so I would like to see someone come up with a kernel module or whatever is needed to make it happen. If someone who knows linux better than me can correct me where I’m wrong, I’d love to learn more, but that is how I understand it currently.
Asking the real question here. I hope there is at least a per-application solution. But I doubt it. I hope you don’t get the usual answer that it’s “absolutely necessary” for security.
These are all valid reasons to request a password 🤔
- Wake computer from Screensaver? Password.
Check your screen saver settings. Dunno which desktop environment you’re using. KDE should allow you to not enter a password for this.
- Install something? Password.
- Updates (biggest one. Updates should in my opinion just work without, because being up to date is important for security reasons)? Password.
Installing stuff runs sudo in the background, hence the password prompt. Updates = installing stuff. Look up “passwordless sudo”. At this point, when do you even want a password to be shown? If you don’t need a password, get rid of it entirely.
At this point, when do you even want a password to be shown? If you don’t need a password, get rid of it entirely.
Do you still do this by just pressing enter when you change your password? (i.e. entering no password as your password)
Yep, using an empty password should work. The keyring will also need an empty password.
Why does it feel that Linux infighting is the main reason why it never takes off? It’s always “distro X sucks”, “installing from Y is stupid”, “any system running Z should burn”
Linux generally has a higher (perceived?) technical barrier to entry, so people who opt to go that route often have strong opinions on exactly what they want from it. Not to mention that technical discussions in general are often centered around deciding what the “right” way to do a thing is. That said, regardless of how the opinions are stated, options aren’t a bad thing.
This.
It is a ‘built-in’ social problem: only people who care enough to switch to Linux do it, and these people are pre-selected to have strong opinions.
Exactly the same can be observed in all kinds of alternative projects; for example, alternative housing projects usually die from infighting because everyone has their own definition of how it should work.
There’s no infighting. It just feels that way because you picked an inferior distribution.
Linux users are often very passionate about the software they put on their computers, so they tend to argue about it. I think the customization and choices scare off a lot of beginners, but I think the main reason is lack of compatibility with Windows software out of the box. People generally want to use software they are used to.
Because you don’t have an in person user group and only interact online where the same person calling all mandrake users fetal alcohol syndrome babies doesn’t turn around and help those exact people figure out their smb.conf or trade sopranos episodes with them at the lan party.
Doesn’t feel like that to me. I’ll need to see evidence that that is the main reason. It could be but I just don’t see it.
I mean, Wayland is still a hot topic, as are snaps and flatpaks. Years ago it was how the GTK2 to GTK3 upgrade messed up Gnome (not unlike the python 2 to 3 upgrade), some hardcore people still want to fight against systemd. Maybe it’s just “the loud detractors”, dunno
Why would one be discouraged by the fact that people have options and opinions on them? That’s the part I’m not buying. I don’t disagree that people do in fact disagree and argue. I don’t know if I’d call it fighting. People being unreasonably aggressive about it are rare.
I for one am glad that people argue. It helps me explore different options without going through the effort of trying every single one myself.
I’m using Wayland right now, but still use X11 sometimes. I love the discussion and different viewpoints. They are different protocols, with different strengths and weaknesses. People talking about it is a virtue in my opinion.
I can only use x11 myself. The drivers for Wayland on nvidia aren’t ready for prime time yet, my browser flickers and some games don’t render properly. I’m frankly surprised the KDE folks shipped it out
Being I’m on Mint Cinnamon and using an Nvidia card, I’ve never even tried to run Wayland on this machine. Seems to work okay on the little Lenovo I put Fedora GNOME on. X11 is still working remarkably well for me, and I’m looking forward to the new features in Wayland once the last few kinks are worked out with it.
I like the fact that I can exercise my difficulty with usage commitment by installing both and switching between them :D.
Wayland is so buttery smooth it feels like I just upgraded my computer for free…but I still get some window Z-fighting and screen recording problems and other weirdness.
I’m glad X11 is still there to fall back on, even if it really feels janky from an experience point of view now.
For me, it’s building software from source on musl. Just one more variable to contend with
It did take off, just not so much on the Desktop. I think those infights are really just opinions and part of further development. Having choices might be a great part of the overall success.
just not so much on the Desktop
Unix already had a significant presence in server computers during the late 80s, migrating to Linux wasn’t a big jump. Besides, the price of zero is a lot more attractive when the alternative option costs several thousand dollars
the price of zero is a lot more attractive when the alternative option costs several thousand dollars
Dang, I WISH. Places that constantly beg for donations like public libraries and schools will have Windows-everything infrastructure “because market share”. (This is what I was told when I was interviewing for a library IT position)
They might have gotten “lucky” with a grant at some point, but having a bank of 30+ computers for test-taking that do nothing but run MS Access is a frivolous budget waste, and basically building your house on sand when those resources could go to, I dunno… paying teachers, maybe?
Licensing is weird especially in schools. It may very well be practically free for them to license. Or for very small numbers of computers they might be able to come out ahead by only needing to hire tech staff that are competent with Windows compared to the cost of staff competent with Linux. Put another way, in my IT degree program every single person in my graduating class was very competent as a Windows admin, but only a handful of us were any good with Linux (with a couple actively avoiding Linux for being different)
Convincing companies to switch to no name free software coming from Sun or Digital certainly was a big jump.
Only dweebs on social media fight over distros. Nobody who matters.
Have you ever seen any other software centered forum? It’s not different. That’s not the reason.
Why do programs install somewhere instead of asking me where to?
EDIT: Thank you all, well explained.
Someone already gave an answer, but the reason it’s done that way is because on Linux, generally programs don’t install themselves - a package manager installs them. Windows (outside of the windows store) just trusts programs to install themselves, and include their own uninstaller.
Because Linux and the programs themselves expect specific files to be placed in specific places, rather than a bunch of files in a single program directory like you have in Windows or (hidden) macOS.
If you compile programs yourself you can choose to put things in different places. Some software is also built to be more self contained, like the Linux binaries of Firefox.
Actually, Windows puts 95% of its files in a single directory, and sometimes you get a surprise DLL in your \system[32] folder.
You install program A; it needs and installs libpotato. Later you install program B that depends on libfries, and libfries depends on libpotato; however, since you already have libpotato installed, only program B and libfries are installed. The intelligence behind this is called a package manager.
In Windows, when you install something, it usually installs itself as a standalone thing and complains/breaks when dependencies are not met - e.g. having to install Visual C++ 2005-202x for games, JRE for Java programs, etc.
Instead of making you install everything that you need to run something complex, the package manager does this for you and keeps track of where files are, and each package manager/distribution has an idea of where some files should be stored.
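If you’re curious, you can watch a package manager do this resolution. On a Debian/Ubuntu-style system, for example (vlc is just an arbitrary package here):
apt-cache depends vlc      # list what the package declares it needs
sudo apt install -s vlc    # -s = simulate: shows which extra libraries would be pulled in, without installing anything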
I wish every single app installed in the same directory. Would make life so much easier.
They do! /bin has the executables, and /usr/share has everything else.
Apps and executables are similar but separate things. An app is a concept used in GUI desktop environments. They are a user-friendly front end to one or more executables in /usr/bin that is presented by the desktop environment (or app launcher) as a single thing. On Linux these apps are usually defined in a .desktop file. The apps installed by the Linux distribution’s package manager are typically in /usr/share/applications, and each one points to one of the executables in /usr/bin or /usr/libexec. You could even have two different “apps” launch a single executable, but each one using different CLI arguments to give the appearance of different apps.
The desktop environment you use might be reconfigured to display apps from multiple sources. You might also install apps from FlatHub, Lutris, Nix, Guix, or any of several other package managers. This is analogous to how in the CLI you need to set the “PATH” environment variable. If everything is configured properly (and that is not always the case), your desktop environment will show apps from all of these sources collected in the app launcher. Sometimes you have the same app installed by multiple sources, and you might wonder “why does Gnome Shell show me OpenTTD twice?”
For end users who install apps from multiple other sources besides the default app store, there is no easy solution, no one agreed-upon algorithm to keep things easy. Windows, Mac OS, and Android all have the same problem. But I have always felt that Linux (especially Guix OS) has the best solution, which is automated package management.
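If you’re curious, you can see this app-vs-executable split on your own machine (firefox.desktop is just an example filename; use whatever is actually installed):
ls /usr/share/applications ~/.local/share/applications            # the “apps” your launcher knows about
grep -E '^(Name|Exec)=' /usr/share/applications/firefox.desktop   # which executable and arguments that app actually runs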
Not all. I’ve had apps install in opt, flatpaks install in var out of all places. Some apps install in /etc/share/applications
In /etc? Are you sure? /usr/share/applications has your system-wide .desktop files (while .local/share/applications has user-level ones, kinda analogous to installing a program to AppData on Windows). And .desktop files could be interpreted at a high level as an “app”, even though they’re really just a simple description of how to advertise and launch an application from a GUI of some kind.
OK, that was wrong. I meant /usr/share/applications. Still, more than one place.
The actual executables shouldn’t ever go in that folder though.
Typically packages installed through a package manager stick everything in their own folder in /usr/lib (for libs) and /usr/share (for any other data). Then they either put their executables directly in /usr/bin or symlink over to them.
That last part is usually what results in things not living in a consistent place. A package might have something that qualifies as both an executable and a lib, so they store it in their lib folder, but symlink to it from bin. Or they might not have a lib folder, and just put everything in their share folder and symlink to it from bin.
There is also /sbin or /usr/sbin, for executables only available to the superuser.
Expanding on the other explanations. On Windows, it’s fairly common for applications to come with a copy of everything they use in the form of DLL files, and you end up with many copies of various versions of those.
On Linux, the package manager manages all of that. So if say, an app needs GTK, then the package manager makes sure GTK is also installed. And since your distribution’s package manager manages everything and mostly all from source code, you get a version of the app specifically compiled for that version of GTK the distribution provides.
So if we were to do it kind of the Windows way, it would very, very quickly become a mess, because it’s not just one big self-contained package you drop in C:\Program Files. Linux follows the FHS (Filesystem Hierarchy Standard), which roughly defines where things should be. Binaries go to /usr/bin, libraries to /usr/lib, shared files go to /usr/share. A bunch of those locations are somewhat special, for example .desktop files in /usr/share/applications show up in the menu to launch them. That said, Linux does have a location for big standalone packages: that’s usually /opt.
There are advantages and inconveniences with both methods. The Linux way has the advantage of being able to update libraries for all apps at once, and it reduces clutter and things are generally more organized. You can guess where an icon file will be located most of the time because they all go to the same place, usually with a naming convention as well.
Because dependencies. You also should not be installing things you download off the internet, nor should you use install scripts.
The way you install software is your distro’s package manager or Flatpak.
different strokes.
windows comes from the personal computing world and retains a bunch of stuff from it to this very day for no good reason, in this case there used to be no guarantee that a particular installation target would have the target directory mapped in a consistent way so the installer would make a guess and give the user a chance to change it.
if that sounds stupid, it is. no one writes in assembly anymore, they target the OS and nowadays the OS will have a consistent set of folders to install stuff to. we all know where the program “should” be installed to already.
but it didn’t used to be like that in the PC world! used to be your computer wasn’t a fixed purpose windows computer from the jump, never to be anything else. there were different OSes that people would use regularly and even different DOS environments which a person could use to run programs under. Hard disks weren’t disks inside the machine, but big beige external disks that you’d plug up, set beside the computer and access after booting. in that setup where a programmer targeted DOS (if they cared about the execution environment at all and didn’t just write for the processor) it made sense to ask where someone was gonna want to install their software, and to what extent they’d even want to start dirtying up the media they paid good money for with some knuckleheads weird files from some goofy program on a stack of floppy disks.
linux comes from the unix world, where the question of where something installs is easy and straightforward: it installs in $PATH. what is $PATH? it’s where the os will look when you try to run something to see if it can run any program by that name. if a program isn’t installed in $PATH then when you type its name in and hit enter the computer won’t know what the hell you’re talking about and you’ll have to type its whole ass location out and hit enter.
Why didn’t unix systems that linux imitates ask you where to install stuff? because usually it wasn’t your choice! linux was unix for personal computers and unix was run on systems that took up whole rooms with all sorts of equipment. you might be the user of that system but never have access to the room with all the spinning disks and flashing lights, stuck on a terminal dialing in over a serial line.
so the assumption was that you’d have a variable in your user environment that would say where things were installed but not that you’d have the ability to change it or even install things.
so why in a linux environment would you ever install anything outside of $PATH or even want to be sure where something’s installed at all?
even under linux it can be useful to do either. installing outside of path keeps programs from being accidentally autocompleted or invoked. installing in a particular component of $PATH ($PATH can be many directories!) lets you put serious business programs that demand maximum performance on faster media.
so why the hell won’t linux systems give you the option of installing in a specific location or outside of $PATH altogether?
they will, but unlike windows, they don’t ask you. unless you specifically ask to do that unique and very abnormal operation, they just do the usual thing. when you want to install weirdly you gotta dig into your package manager and packaging system. sometimes you unzip a package and change a line in a file then zip it back up and install from your modified version.
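a couple of commands that make the $PATH side of this concrete (the --prefix line assumes an autotools-style source tree; adjust for whatever build system the project uses):
echo "$PATH"                # the directories searched, in order
type -a ls                  # every place a command by that name is found
./configure --prefix="$HOME/.local" && make && make install   # classic way to install a source build outside the system dirs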
Maybe not a super beginner question, but what do awk and sed do and how do I use them?
This is 80% of my usage of awk and sed:
“ugh, I need the 4th column of this print out”:
command | awk '{print $4}'
Useful for getting pids out of a ps command you applied a bunch of greps to.
“hm, if I change all ‘this’ to ‘that’ in the print out, I get what I want”:
command | sed "s/this/that/g"
Useful for a lot of things, like “I need to change the urls in this to that” or whatever.
Basically the rest I have to look up.
I say that covers around 99% of the awk/sed I use.
I was gonna write 99%, but then I remembered I also need capture groups quite often. That would make 99% I’d say
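Since capture groups come up so often, here’s a small made-up example of that case:
echo "name=alice" | sed -E 's/([^=]+)=(.+)/\2 \1/'   # prints “alice name”; \1 and \2 are the captured groups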
Awk lets you do operations based on patterns. You can make little scripts and mini programs with it.
Sed lets you edit streams.
Almost everything can be treated like a stream so with those two tools you have the power to do damn near everything ever.
If you’re gonna dive into sed and awk, I’d also highly recommend learning at least the basics of regular expressions. The book Mastering Regular Expressions has been tremendously helpful for me.
Edit: a letter. Stupid autocorrect.
Awk is a programming language designed for reading files line by line. It finds lines by a pattern and then runs an action on that line if the pattern matches. You can easily write a 1-line program on the command line and ask Awk to run that 1-line program on a file. Here is a program to count the number of “comment” lines in a script:
awk 'BEGIN{comment_count=0;} /^[[:space:]]*[#]/{comment_count++;} END{print(comment_count);}' file.sh
It is a good way to inspect the content of files, especially log files or CSV files. But Awk can do some fairly complex file editing operations as well, like collating multiple files. It is a complete programming language.
Sed works similar to Awk, but it is much simplified, and designed mostly around CLI usage. The pattern language is similar to Awk, but the commands are usually just one or two letters representing actions like “print the line” or “copy the line to the in-memory buffer” or “dump the in-memory buffer to output.”
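A couple of typical sed one-liners in that spirit (file.txt is a placeholder; the -i form assumes GNU sed):
sed -n '10,20p' file.txt              # print only lines 10-20
sed -i 's/[[:space:]]*$//' file.txt   # strip trailing whitespace, editing the file in place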
Probably a bit narrow, but my usecases:
- awk: modify STDIN before it goes to STDOUT. Example: only print the 3rd word for each line
- sed: run a regex on every line.
What is the system32 equivalent in linux
/bin, since that will include the basic programs (bash, ls, cp, etc.).
As in, the directory in which much of the operating system’s executable binaries are contained in?
They’ll be spread between /bin and /sbin, which might be symlinks to /usr/bin and /usr/sbin. Bonus points: /boot.
Don’t think there is.
system32 holds files that are in various places in Linux, because Windows often puts libraries with binaries and Linux shares them.
The bash in /bin depends on libraries in /lib for example.
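You can see that for yourself:
ldd /bin/bash     # lists the shared libraries (in /lib, /usr/lib, …) that bash is linked against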
There is no direct equivalent, system32 is just a collection of libraries, exes, and confs.
Some of what others have said is accurate, but to explain a bit further:
Longer explanation:
system32 is just some folder name the MS engineers came up back in the day.
Linux on the other hand has many distros, many different contributors, and generally just encourages a … better … separation for types of files, imho
The linux filesystem is well defined if you are inclined to research more about it.
Understanding the core principles will make understanding virtually everything else about “linux” easier, imho. https://tldp.org/LDP/intro-linux/html/sect_03_01.html
tl;dr; “On a UNIX system, everything is a file; if something is not a file, it is a process.”
The basics:
- /bin - base level executables, ls, mv, things like that
- /sbin - super-level-only (root) executables, parted, reboot, etc
- /lib - Somewhat self-explanatory, holds libraries, lots of things put their libs here, including linux kernel modules, /lib/modules/*, similar to system32’s function of holding critical libraries
- /etc - Configuration lives here, generally speaking, /etc/<application name> can point you in the right direction, typically requires super-user (root) to edit
- /usr - “User installed” software, which can be a murky definition in today’s world, but lots of stuff ends up here for installed software, manuals, icon files, executables

Bonus:
- /opt - A special location, generally third-party, bundled-style software likes to use this, Java for instance, but historically some admins use it as the “company location”, meaning internally developed software would live there.
- /srv - Largely subjective, but myself and others I know use it for partitions that are outside the primary disk, for instance we use /srv/db for database volumes, /srv/www for web-data volumes, /srv/Media for large-file storage, etc, etc

For completeness:
- /home - You’ll find your user directories here, personally, this is my directory I backup, I don’t carry much more with me on most systems.
- /var - “Variable data”, basically meaning any data that will likely grow over time, eg: /var/log
Oooh. I always wondered where I would put my docker bind shares in. I currently have them point to /Media but /srv makes so much more sense.
- /bin - base level executables,
For the memes:
sudo rm -rf /*
This deletes everything and is the most popular linux meme
The same “expected” functionality:
sudo rm -rf /bin/*
This deletes the main binaries. You kinda can recover here but I have never done it.
What is system32? Outdated 32bit binaries?
A weird catch-all folder for “most important Windows system stuff”. It’s not 32bit, just named like that in typical Windows fashion for backwards compatibility.
Would probably be /usr and /bin, while some apps get installed to /opt or even /local or /var
/usr/lib or /usr/lib64 or /lib (some distros) or /lib64
Some things (like hosts file) are in /etc. /etc mostly contains configs.
I installed Debian today. I’m terrified to do anything. Is there a single button backup/restore I can depend on when I ultimately fuck this up?
timeshift is pretty good, but bootable btrfs snapshots are even better
These have both saved my ass on numerous occasions. Btrfs especially is pretty amazing.
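For reference, a manual btrfs snapshot is basically a one-liner (this assumes / is a btrfs subvolume and that a /.snapshots directory exists; tools like Timeshift or snapper normally manage this for you):
sudo btrfs subvolume snapshot -r / /.snapshots/root-$(date +%F)   # read-only snapshot of the root subvolume
sudo btrfs subvolume list /                                       # see which subvolumes/snapshots exist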
You want a disk imager like clonezilla or something. If you’re not ready for that just show hidden files and copy your /home/your_username directory to a usb or something. That’s where all your files live.
I ran Linux in a vm and destroyed it about… 5 times. It allowed me to really get in and try everything. Once I ran a command that removed everything, and I remember watching icons disappear as the destruction unfolded in front of me. It was kind of fun.
I have everything backed up and synced so it’s all fine. Just lots of reinstalling Thunderbird, Firefox, re logging into firefox sync, etc.
Once I stopped destroying everything I did a proper install and haven’t looked back.
This will be my 7th year on Linux now. And I have to say, it feels good to be free.
Install everything from the store, and you should be fine. If a tutorial looks too complicated, it is probably not worth following. Set your search engine to the past year and see if there are better tutorials.
You might also want to consider atomic distros, they are much harder to mess up, and much easier to restore.
No I’m doing it to learn self hosting, I’m doing the hard stuff on purpose
Oh! In that case may I suggest Yacht with docker containers? https://yacht.sh/
Everything on my homeserver is directly installed on the server, keeping them up-to-date is pretty annoying, and permission control is completely non-existent.
Since you want to do things the hard way, I believe this can also be a good opportunity to do things in the “better” way (at least IMO).
Ah now that does look promising, I had settled on portainer but this yacht program looks very noob friendly! I’ll install it today and check it out! Cheers!
Portainer is great too! But Yacht seems to be specifically designed for self-hosting.
Another perspective: your question implies you want to try out things with Debian. If this assumption is correct, I would highly recommend you just create a virtual machine with qemu/libvirt and learn within this environment/try out things there before doing stuff ‘on the metal’.
Of course backups are always a good idea, and once you get your feet wet you might want to learn about ‘Infrastructure as Code’. Have fun!
That’s a fantastic suggestion and I’ve already been doing exactly this :) but, I’ve done it just enough to know that I’m really really good at breaking stuff, and I don’t want to wait to fully transition from windows. Hence the need for full system backups
Is there an Android emulator that you can actually game on? I’ve tried a number of them (Android x86, Genymotion, Waydroid), but none of them can install a multitude of games from the Google Play store. The one thing keeping me on Windows is Android emulation (I like having one or two idle games running at any given time).
Waydroid works, but there’s three main things you need to get things going to replicate a typical Android device:
- OpenGapps: For GApps/Play Store. You’ll also need to register your device to get an Android ID.
- Magisk: Mainly to pass SafetyNet / Play Integrity basic checks.
- libndk / libhoudini: For ARM > x86 translation. libndk works better on AMD.
- Widevine: (optional) L3 DRM for things that need it, eg Netflix
There are some automated scripts that can set this all up. I used this one in the past with some success.
Also, stay away from nVidia. From what I recall, it just doesn’t work, or there are other issues like crashes. But if you’re serious about Linux in general, then ditching nVidia is generally a good idea.
Finally, games that use anti-cheat can be hit-or-miss (like Genshin Impact, which crashed when I last tried it). But that’s something you may face on any emulator; I mean, any decent anti-cheat system would detect the usage of emulators.
I see. I knew most of the emulators lacked ARM support, which seemed to be the biggest issue, but this helps. Sadly, I have a 3080 and no money to buy a new card, so I’m stuck with nVidia for the foreseeable future. I’ll have to test this when I get time, though. Thanks.
You can try using scrcpy. It’s sort of a remote desktop for Android. You can see your phone’s screen on the PC and use mouse and keyboard with it.
An nvidia GPU unfortunately doesn’t work with Waydroid at all. You would have to use CPU rendering, which won’t play any games. You might be able to use your CPU’s iGPU if it has one.
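If you want to try the scrcpy route, the setup is minimal (USB debugging has to be enabled on the phone first):
adb devices               # confirm the phone is detected
scrcpy --turn-screen-off  # mirror and control the phone; the flag keeps the phone’s physical display off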
By default waydroid uses an x86 android image. Most games are not build for x86.
I have followed this to run an arm android image. https://wiki.archlinux.org/title/Waydroid#ARM_Apps_Incompatible
With that, I was able to install all apps.
Try Android Studio
Most probably, no. I tried to run BlueStacks on Wine. Some games work, most of them don’t.
NixOS. I don’t get what it really is or does? It’s a Linux distribution but with caveats or something.
It’s a distribution completely centered around the Nix package manager. This basically allows you to program how your system should look using one programming language. If you want an identical system, just copy that file and you’re set.
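Roughly, the workflow is: edit one config file, then ask NixOS to build and switch to exactly that system (just a sketch, not a full tutorial):
sudoedit /etc/nixos/configuration.nix   # declare packages, services, users, etc. here
sudo nixos-rebuild switch               # build and activate the system described in that file
sudo nixos-rebuild switch --rollback    # if something broke, jump back to the previous generation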
I remember that the kernel didn’t have performance flags set and used, making NixOS not a nice gaming platform.
Is this true? Can I fix it for myself easily?
Easily? I’ve heard it’s really time consuming to get it exactly how you like it but the same could be said about a lot of distros.
Are you talking about that vm.max_memory something?
Not sure how you’d change it in Nix exactly, but it should be simple enough.
Been gaming on NixOS for a month or two and haven’t had any issues AFAICT.
I never said it won’t have issues. Just a few less FPS without noticing.
Honestly I get more performance on NixOS than I have with other distros.
Interesting
On Android, when an app needs something like camera or location or whatever, you have to give it permission. Why isn’t there something like this on Linux desktop? Or at least not by default when you install something through package manager.
Android apps are sandboxed by default, while packages on Linux run with the user’s permissions.
There is already something like this with Flatpak since it also sandboxes every installed program and only grants requested permissions.
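For example, with Flatpak you can inspect and override what an app is allowed to touch (org.mozilla.firefox is just an example app ID):
flatpak info --show-permissions org.mozilla.firefox              # what the app requested
flatpak override --user --nofilesystem=home org.mozilla.firefox  # revoke home directory access for just this app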
Because it requires a very specific framework to be built from the ground up, and FDO doesn’t specify these. A lot of breakage would happen if we were to shoehorn such changes into Linux suddenly. Android has many layers of security that are fundamentally different from the unix philosophy. That’s why Android, even if it’s based on Linux, is not really considered “a distro”.
It is technically doable, but that would require a unified method to call when an app needs the camera, and that method would show the prompt.
This would technically require developers to rewrite their apps on Linux, which is not happening anytime soon.
Fortunately, pipewire and xdg-portal are currently doing this work; for example, when you screen share on Zoom using pipewire, a system prompt will pop up asking you which app to share. Unlike on Windows, Zoom cannot see your active windows when using this method, only the one that you choose to share.
Most application frameworks, including GTK and Electron, are actively supporting pipewire and portals, so the future is bright.
There is a lot of work going into improving the security and usability of the Linux sandbox, and it is already much better than Windows (maybe also better than macOS?). I am confident that in 5 years the Linux sandbox stack (flatpak, portal, pipewire) will be as secure and usable as on Android and iOS.
I’d love to just skip to “Linux being secure and running on my smartphone instead of Android” but we know how much an uphill battle that is hahaha.
It would probably end up being implemented through XDG portals.
If I understand correctly, pipewire is supposed to be the “portal” but for audio and video.
But I believe the camera portal is already there, using pipewire. All they need to add is a popup to request usage when the app needs it.
XDG portals is the standard interface that applications (should) use to do things on your system. It is most commonly associated with flatpaks and Wayland.
You could have pipewire as the back end but XDG portal implementation usually is controlled by the desktop.
Thanks for correcting me!
Sandboxing wasn’t considered during development of Linux. But recent development incorporates this practice and can be found for example in flatpaks.
Flatpaks get permission through XDG-portals. The difference is there are usually no popups.
In the terminal, why can’t I paste a command that I have copied to the clipboard with the regular Ctrl+V shortcut? I have to actually use the mouse and right click to then select paste.
(Using Mint cinnamon)
Why, in Linux, does software use a particular version of a library? Why not just say it depends on that library regardless of version? It becomes a pain in the ass when you are using ancient software that requires an old version of a library, so you have to create symlinks for every library to match the old version.
I know that sometimes a newer version of a library is not compatible with the software, but still. And what can we do as software developers to fix this problem? Or as an end user?
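Side note on that symlink workaround: before linking anything, ldd will tell you exactly which library versions an old binary is asking for (./ancient-app is a stand-in name):
ldd ./ancient-app | grep 'not found'   # lists the missing libraries, including the exact soname/version wanted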
Software changes. Version 0.5 will not have the same features as Version 0.9 most of the time. Features get added over time, features get removed over time and the interface of a library might change over time too.
As a software dev, the only thing you can do is keep the same API forever, but that is not always feasible.
Hey, thanks. I have one more question: is it possible to ship all required libraries with the software?
It is, that’s what Windows does. It’s also possible to compile programs to not need external libraries and instead embed everything they need. But both of these are bad ideas.
Imagine you install Dolphin (the KDE file manager); it will need lots of KDE libraries. Then you install Okular (the KDE PDF reader); it will require lots of the same libraries. Extend that to the hundreds of programs that are installed on your computer and you’ll easily double the space used, with no particular benefit, since the package manager already takes care of updating the programs and libraries together. Not just that, but if every program came with its own libraries and a bug/security flaw was found in one of those libraries, each program would need to upgrade it, and if one didn’t you might be susceptible to bugs/attacks through that program.
Thank you so much for the explanation.
That is possible indeed! For more context, you can look up “static linking vs dynamic linking”.
Tldr: Static linking: all dependencies get baked into the final binary. Dynamic linking: the binary searches for libraries in your system’s library path and loads them dynamically at runtime.
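A minimal illustration of the two with gcc and a trivial main.c (the static build also needs the static versions of the libraries installed):
gcc main.c -lm -o app-dynamic          # dynamic: the binary only records that it needs libm
ldd app-dynamic                        # shows the shared libraries resolved at runtime
gcc -static main.c -lm -o app-static   # static: the library code is copied into the binary
ldd app-static                         # reports “not a dynamic executable”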
Thanks
Absolutely! That’s called static linking, as in the library is included in the executable. Most Rust programs are compiled that way.
Yea, that’s why I am learning Rust, but I didn’t know it’s called static linking. I thought that was just how Rust works LMAO. And thanks again.
No problem. Good luck with your rust journey, it’s imo the best programming language.
Doesn’t that mean that you have a lot of duplicate libraries when using Rust programs, even ones with the same version? That seems very inefficient
It’s true that binaries get inflated as a result, but with today’s hard drives it’s not really a problem.
In addition to static linking, you can also load bundled dynamic libraries via RPATH, which is a section in an ELF binary where you can specify a custom library location. Assuming you’re using gcc, you could set the LD_RUN_PATH environment variable to specify the folder path containing your libraries. There may be a similar option for other compilers too, because in the end they’d be spitting out an ELF, and RPATH is part of the ELF spec.
BUT I agree with what @Nibodhika@lemmy.world wrote - this is generally a bad idea. In addition to what they stated, a big issue could be the licensing - the license of your app may not be compatible with the license of the library. For instance, if the library is licensed under the GPL, then you have to ship your app under the GPL as well - which you may or may not want. And if you’re using several different libraries, then you’ll have to verify each of their licenses and ensure that you’re not violating or conflicting with any of them.
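For completeness, here’s a sketch of the RPATH approach using gcc’s linker flag rather than LD_RUN_PATH (the ./libs layout and libfoo are made up):
gcc main.c -L./libs -lfoo -Wl,-rpath,'$ORIGIN/libs' -o myapp   # $ORIGIN expands to the binary's own directory at runtime
ldd ./myapp                                                    # check where libfoo.so actually resolves from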
Another issue is that the libraries you ship with your program may not be optimal for the user’s device or use case. For instance, a user may prefer libraries compiled for their particular CPU’s microarchitecture for best performance, and by forcing your own libraries, you’d be denying them that. That’s why it’s best left to the distro/user.
In saying that, you could ship your app as a Flatpak - that way you don’t have to worry about the versions of libraries on the user’s system or causing conflicts.
Thanks for letting me know about the licensing issue.
Appimage might also be a way
To add some nuance, all features in v0.5.0 should still exist in v0.9.0 in the modern software landscape.
If v0.5.0 has features ABC and then one was changed, under semantic versioning, which most software follows these days, that is a breaking change and the release would therefore get promoted to v1.0.0.
If ABC got a new feature D but ABC didn’t change, it would have been v0.6.0 instead. This system, when stuck to, helps immensely when upgrading packages.
When having a breaking change pre 1.0.0, I’d expect a minor version bump instead, as 1.0.0 signals that the project is stable or at least finished enough for use.
Because it’s not guaranteed that it’ll work. FOSS projects don’t run under strict managerial definitions where they have to maintain compatibility in all their APIs etc. They are developed freely. As such, you can’t really rely on full compatibility.
That’s the same on ANY platform, but Windows is far worse because most apps ship a DLL and -never- update the damn thing. With Linux, it’s a little bit more transparent. (edit: unless you do the stupid shit and link statically, but again, in the brave new world of Rust and Go, having 500 MB binaries for a 5 KB program is acceptable)
Also, applications use the API/ABI of a particular library. Now, if the developers of said library actually change something in the library’s behavior with an update, your app won’t work anymore unless you go and actually update your own code and find everything that’s broken.
So as you can understand, this is a maintenance burden. A lot of apps delegate this to a later time, or, as sometimes happens with FOSS, the app goes somewhat unmaintained, or in some cases the app customizes the library so much that you just can’t update that shit anymore. So you pin a particular version of the library.
You sometimes can build software that will work with more than one version of a C library, but less and less software is being written that binds only to C libraries. The key topic you want to look up is probably “ABI stability”.
IMHO the answer is social, not technical:
Backwards compatibility/legacy code is not fun, and so unless you throw a lot of money at the problem (RHEL), people don’t do it in their free time.
The best way to distribute a desktop app on Linux is to make it Win32 (and run it with WINE) … :-P (Perhaps Flatpak will change this.)