• 0 Posts
  • 130 Comments
Joined 1 year ago
Cake day: July 1st, 2023


  • Personally I’ve used Western Digital, Seagate, and PNY drives with no failures. Stay away from anything HP branded; they don’t actually produce drives, but rather rebadge failure-prone models from other manufacturers and make it damn near impossible to claim any warranty.

    I’ve had a Samsung Evo drive fail on me, but the warranty process was pretty easy. I’ve also had a PNY 2.5" SSD that has never failed on me, though I did accidentally break the connector off. The warranty on that was ridiculously easy too, despite it being entirely user error.

    If the data is mission critical, it’s worth shelling out extra: stay away from the cheap brands (HP, SanDisk, etc.) and opt for the higher-end models from reputable brands (e.g. WD Red, Purple, and Gold over Green and Blue, or Seagate IronWolf or FireCuda).

    These are just my own personal experiences. Others will have had better or worse luck, so I encourage you to seek out other people’s experiences and opinions as well, and I’d invite others to add their own here.

  • “While we are making this change to ensure users’ expectations regarding a community’s access do not suddenly change, protest is allowed on Reddit,” writes Nestler. “We want to hear from you when you think Reddit is making decisions that are not in your communities’ best interests. But if a protest crosses the line into harming redditors and Reddit, we’ll step in.”

    Y’all have very clearly demonstrated that you do not care about the communities’ best interests, and you have no interest in hearing what we think. Fuck Spez and good riddance to Reddit.

  • That’s alright, I’ll do my best to walk you through it.

    Your drive contains multiple partitions (/dev/sda1 through /dev/sda3).
    One of these partitions is going to be your EFI partition. This is what your system can read before Linux boots: your firmware can’t understand ext4 / btrfs / etc., but it can understand FAT32.
    If you run lsblk -no FSTYPE /dev/sda1 it should return vfat if that’s your EFI partition. That’s the one we’re going to mount to /mnt/boot/efi.
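    For example, to see every partition on the drive along with its filesystem type in one go (just a quick sketch; swap in your actual device name if it isn’t /dev/sda):

    # List each partition with its filesystem type and size
    lsblk -o NAME,FSTYPE,SIZE /dev/sda
    # Or query a single partition directly
    lsblk -no FSTYPE /dev/sda1    # prints "vfat" for a typical EFI partition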

    I’m assuming that /dev/sda3 is your data partition, i.e. where your Linux install is. You can find its filesystem format the same way as your EFI partition’s. Edit: After determining which partition is which, you’re going to want to mount the root partition first, and then the EFI partition:
    mount /dev/sda3 /mnt
    mount /dev/sda1 /mnt/boot/efi

    Unix systems follow the philosophy of “everything is a file”: all devices and system interfaces are exposed as files. As such, to be able to properly chroot into an offline install, we need to bind-mount those from our running system into the offline system. That’s what’s achieved by running for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done
    That’s just a simple loop that mounts /dev, /dev/pts, /proc, /sys, and /run into your offline install. You’re also going to want to either add /sys/firmware/efi/efivars to that list, or mount it separately (with -B, which is shorthand for --bind, not a normal mount), as sketched below.
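    Putting that together, the bind mounts end up looking something like this (a sketch assuming your root is already mounted at /mnt; the efivars line is the extra one mentioned above):

    # Bind the running system's virtual filesystems into the offline install
    for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done
    # EFI variables need their own bind so grub can register a boot entry from inside the chroot
    sudo mount -B /sys/firmware/efi/efivars /mnt/sys/firmware/efi/efivars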

    Once you’ve done this, you should be able to successfully chroot into /mnt (or into the root subvolume under it, e.g. /mnt/root, if running btrfs).
    At this point, you should be able to run your grub repair commands.
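    For reference, the repair itself usually boils down to something like this (a sketch assuming a Debian/Ubuntu-style install booting from /dev/sda; other distros name their grub tools slightly differently):

    sudo chroot /mnt
    # Inside the chroot: reinstall grub to the EFI partition and regenerate its config
    grub-install /dev/sda
    update-grub
    exit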


  • I’m doing my morning scroll before I start my day, so I can’t delve too deep, but this is the article I always reference when I have to do repairs:

    https://askubuntu.com/a/831241

    The #1 thing I noticed in your image is that lsblk only lists your partitions; it doesn’t mount them. You probably want /dev/sda3 mounted at /mnt.

    The only thing from the article you’ll want to modify is using mount -B /sys/firmware/efi/efivars /mnt/sys/firmware/efi/efivars. I believe the behaviour changed since that article was written, and that’s what worked for me.

    Additionally, if your drive is formatted as btrfs instead of ext4, then once you mount it your root will most likely live in a subvolume directory such as /mnt/admin or similar. Point the bind mounts at that folder instead of /mnt.
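    For instance, the layout might look something like this (rough sketch; “@” here stands in for whatever subvolume name you actually see under /mnt):

    sudo mount /dev/sda3 /mnt
    ls /mnt                       # find the subvolume holding your root, e.g. @, @root, or similar
    # Point the bind mounts and the chroot at that subvolume instead of /mnt itself
    for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt/@$i; done
    sudo chroot /mnt/@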

    If you have questions, lmk and I’ll get back to you at some point today.