I like to code, garden and tinker

  • 0 Posts
  • 24 Comments
Joined 9 months ago
Cake day: February 9th, 2024



  • The act of refusing is what determines whether seizing your phone is legal or not. If you hand over your phone of your own free will, any evidence they find can be used in court. If you refuse to give them your phone and they seize and access it anyway, you have a valid path to getting the evidence they discover thrown out as an illegal search and seizure of your property. I’m not a lawyer, but that is the general reasoning behind denying them access to your property.

    Edit: Just want to say this mostly pertains to United States law and similar legal structures. This advice is not applicable everywhere, and you should research your own country’s rights and legal protections.


  • I would personally rather trust that my device can’t be unlocked without my permission than hope I’m able to perform some action to disable it in certain situations. The availability of such features is nice, but I assume I would be incapable of performing those actions in the moment.

    My other thought is: how guilty does someone look to a jury of their peers if they immediately try to lock their phone down like that? I’d rather have the deniability of “I didn’t want to share my passcode” than “I locked my phone down because the cops were grabbing me.”



  • The quotation marks did most of the lifting there, and it’s more of an anecdote about their own projections turned back on themselves. They assume these “welfare queens” are driving around in high-end cars and living luxurious lifestyles on the government’s dollar, while they are the ones doing exactly that. Sorry if there was any confusion. I agree with all the statements you made against Brett Favre, though; he is the scum of the system he wishes to project onto others.


  • Despite texts that show Favre sought to keep his receipt of the funds confidential, Favre has said he didn’t know the money came from federal funds intended for poor people. He’s paid the money back, but he’s being sued by the state of Mississippi for hundreds of thousands of dollars in interest that accrued on the money he received. Favre hasn’t been accused of any criminal wrongdoing.

    Source: (Yahoo News)

    So he could easily have funded this himself, but would rather steal public funds because “free money”? Sounds like a so-called “welfare queen” to me.





  • I’ve never run this program myself, but I skimmed the documentation. You should be able to use the SHIORI_DIR environment variable (or a custom database table following those instructions) along with the -p argument when launching the web interface. A simple bash snippet that should work:

    # give each instance its own data directory, then serve it on its own port
    export SHIORI_DIR=/path/to/shiori-data-dir
    shiori serve -p 8081
    

    To run multiple instances, I’d suggest setting each one up as a service on your machine so they survive reboots and/or crashes.
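
    For example, a minimal systemd unit sketch; the unit name, binary path, and data directory here are just assumptions to adapt to your setup:

    # /etc/systemd/system/shiori-personal.service (hypothetical name and paths)
    [Unit]
    Description=Shiori bookmark manager (personal instance)
    After=network.target

    [Service]
    Environment=SHIORI_DIR=/path/to/shiori-data-dir
    ExecStart=/usr/local/bin/shiori serve -p 8081
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    Enable it with systemctl enable --now shiori-personal.service, then repeat with a different data directory and port for each additional instance.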

    Now for serving them, you have two options. The first is to just let users connect to the ports directly, but this generally isn’t done for outward-facing services (not that you can’t). The second is to set up a reverse proxy and route the traffic through subdomains or subpaths. Nginx is my go-to solution for this; I’ve also heard good things about Caddy. You’ll most likely have to use subdomains, as lots of apps assume they’re served from the root path unless you do some tinkering.
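
    As a rough sketch, one nginx server block per subdomain could look like this; the domain name and port are assumptions:

    # proxy bookmarks.example.com to the instance listening on 8081
    server {
        listen 80;
        server_name bookmarks.example.com;

        location / {
            proxy_pass http://127.0.0.1:8081;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }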

    Edit: Corrected incorrect CLI arguments and a typo.




  • If you are expecting a more Windows-like experience, I would suggest using Ubuntu or Kubuntu (or any other distro using GNOME/KDE), as these are much closer to a modern Windows GUI. With Ubuntu, I can use the default file manager (Nautilus), press Ctrl+F, filter files via *.ext, then select those files and cut and paste them into a new folder (drag and drop does not seem to work from the search results). In Kubuntu, KDE’s file manager (Dolphin) doesn’t recognize * as a wildcard in its search, but it does support drag and drop between windows.
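
    For what it’s worth, the same bulk move is a couple of lines in a terminal on either distro; the folder and extension here are just placeholders:

    # move every file with a given extension into a new folder
    mkdir -p /path/to/new-folder
    mv ./*.ext /path/to/new-folder/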




  • In my humble opinion, we too are simply prediction machines. The main difference is how efficient our brains are at the large number of tasks they have to accomplish, given their size and energy requirements. No matter how complex the network is, it is still a mapped outcome; the number of factors weighed is just extremely large, which gives a more intelligent response. You can see this with each increment in the GPT models, which use larger and larger parameter sets and give more and more intelligent answers. The fact that we call these “hallucinations” shows how effective the predictive math is, and it mimics humans’ ability to just make things up on the fly when we don’t have a solid knowledge base to back it up.

    I do like this quote from the linked paper:

    As we will discuss, we find interesting evidence that simple sequence prediction can lead to the formation of a world model.

    That is to say, you don’t need complex solutions to map complex problems; you just need to have learned how you got there. It’s never purely random attempts at the problem; it’s always predictive attempts that try to map the expected outcomes and learn by getting things right and wrong.

    At this point, it seems fair to conclude the crow is relying on more than surface statistics. It evidently has formed a model of the game it has been hearing about, one that humans can understand and even use to steer the crow’s behavior.

    Which is to say that it has a predictive model based on previous games. This doesn’t mean it must rigidly follow previous games, but that by playing many games it can see how each move affects the next. This is a simpler example because most board games are simpler than language, with fewer possible outcomes. This isn’t to say the crow is now a grandmaster at the game, but it has the reasoning to understand possible next moves, knows which moves are illegal, and knows to take the most advantageous move based on its current model. This is all predictive in nature, with “illegal” moves being assigned very low probability because, in the learned behavior, those moves never happen. It also allows for possible unknown moves that a different model wouldn’t consider, while overall providing what is statistically the best move according to its model. This lets the crow be placed into unknown situations and give an intelligent response instead of just going “I don’t know this state, I’ll do something random.” The prediction won’t always be correct, but it will most likely be a valid and statistically sound move.

    Overall, we aren’t totally sure what “intelligence” is; we are just organisms that have developed more and more capability to process information out of a need to survive. But getting down to it, we know neurons take inputs and give outputs based on what they register as the best response to the given input, and when enough of these are added together we get “intelligence”. In my opinion it’s still all predictive; it’s how the networks are trained and gain meaning from the data that isn’t always obvious. It’s only when you blindly accept any answer as correct that you run into the issues we’ve seen with ChatGPT.

    Thank you for sharing the article; it was an interesting read and helped clarify my understanding of the topic.


  • Disclaimer: I am not an AI researcher and just have an interest in AI. Everything I say is probably gibberish, just my amateur understanding of the AI models used today.

    It seems these LLMs use a clever trick of probability to give words meaning via statistics on their usage, so any result is just a statistical chance that those words will work well with each other. The number of indexes used to index “tokens” (in this case words), along with the number of layers in the model used to correlate usage of those tokens, seems to drastically increase the “intelligence” of the responses. This doesn’t seem able to overcome unknown circumstances; instead it does what AI does and relies on probability to answer the question, so in those cases the next closest thing from the training data is substituted and considered “good enough”. I would think some kind of confidence variable is what the current LLMs truly need, as they seem capable of giving meaningful responses but give a “hallucinated” response when not enough data is available to answer the question.

    Overall, I would guess this is a limitation in the LLM’s ability to map words to meaning. Imagine reading everything ever written; you’d probably be able to make intelligent responses to most questions. Now imagine you were asked something you had never read about, but were still expected to respond with an answer. That is what I personally feel these “hallucinations” are, or, in my opinion, the LLM’s best approximations. You can only answer what you know reliably; otherwise you are just guessing.



  • Looking over the GitHub issues, I couldn’t find a feature request for this, so it seems it’s not being considered at the moment. You could make a suggestion over there; I do think this feature would be useful, but it’s up to the devs to implement it.

    That being said, I wouldn’t count on this feature being implemented. It would only work on instances that obey the rules, so some instances could simply strip it out. When you look up your account on my instance (link here), it is up to my server to respect your option to hide your profile comments. This means the option has to be federated per user, which adds a great deal of complexity to the system and can easily be thwarted by someone running an instance that chooses not to follow these rules.

    If your goal is to stop people from looking up your historical activity, it might be best to use multiple accounts and switch to a new one every so often to break up your history. You could also delete your content, but again, it is up to each instance to respect the deletion request. It’s not an optimal solution, but depending on your goals it is the available one.

    Edit: Also, if you’re curious about the downvotes, it’s not the subject matter but that your post violates Rule 3: Not regarding using or support for Lemmy.