• 1 Post
  • 22 Comments
Joined 1 year ago
Cake day: July 9th, 2023

  • If your computer is compromised to the point someone can read the key, read words 2-5 again.

    This is FUD. Even if Signal encrypted the local data, at the point someone can run a process on your system, there’s nothing to stop the attacker from adding a modified version of the Signal app, updating your path, shortcuts, etc. to point to the malicious version, and waiting for you to supply the PIN/password. They can siphon the data off then.

    Anyone with an actual need for concern should probably only be using their phone anyway, because it cuts your attack surface in half (more than half if you have multiple computers), and you can expect to be in possession/control of your phone at all times, vs. a computer that is often left unattended.


  • it doesn’t unravel the underlying complexity of what it does… these alternative syntaxes tend to make some easy cases easy, but they have no idea what to do with more complicated cases

    This can be said of any higher-level language, or API. There is always a cost to abstraction. Binary -> Assembly -> C -> Python. As you go up that chain, many things get easier, but some things become impossible. You always have the option to drop down, though, and these regex tools are no different. Software development, sysops, devops, etc are full of compromises like this.
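
    As a quick illustration of “drop down when you need to” (the contains_word helper here is hypothetical, not any particular library’s API):

    import re

    # A toy "friendly regex" helper - the easy case reads nicely:
    def contains_word(word):
        return re.compile(r"\b" + re.escape(word) + r"\b")

    print(bool(contains_word("cat").search("the cat sat")))  # True

    # For anything the helper has no vocabulary for (e.g. lookaheads),
    # you drop down to the raw syntax:
    strong_pw = re.compile(r"^(?=.*\d)(?=.*[A-Za-z]).{8,}$")
    print(bool(strong_pw.match("hunter2hunter")))  # True
    print(bool(strong_pw.match("password")))       # False - no digit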


    You are conflating the concept and the implementation. PFS is a feature of key-exchange protocols, and network protocols are the most frequently cited examples, but networks are not part of the definition. From your second link, the definition is:

    Perfect forward secrecy (PFS for short) refers to the property of key-exchange protocols (Key Exchange) by which the exposure of long-term keying material, used in the protocol to authenticate and negotiate session keys, does not compromise the secrecy of session keys established before the exposure.

    And your third link:

    Forward secrecy (FS): a key management scheme ensures forward secrecy if an adversary that corrupts (by a node compromise) a set of keys at some generations j and prior to generation i, where 1 ≤ j < i, is not able to use these keys to compute a usable key at a generation k where k ≥ i.

    Neither of these mentions networks - only protocols/schemes, which are concepts. Cryptography exists outside networks, and outside computer science (even if that is where it finds the most use).

    Funnily enough, these two definitions (which, I’ll remind you, come from the links you provided) are directly contradictory. The first describes protecting information “before the exposure” (i.e. past messages), while the second says a compromise at generation j cannot be used to compromise generation k, where k is strictly greater than j (i.e. a future message). So much for the hard-and-fast definition from “professional cryptographers.”

    Now, what you’ve described with Matrix sounds like it is having a client send old messages to the server, which are then sent to another client. The fact the content is old is irrelevant - the content is sent in new messages, using new sessions, with new keys. This is different from what I described, where a new client downloads old messages (encrypted with the original keys) from the server. In any case, both of these scenarios create an attack vector through which an adversary can get all of your old messages, which, whether or not you believe it violates PFS by your chosen definition, does defeat its purpose (perhaps you prefer that phrasing to “break” or “breach”).
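
    To illustrate what deleting old keys actually buys you, here is a bare-bones hash ratchet - a sketch of the general idea only, not Signal’s actual Double Ratchet:

    import hashlib
    import hmac

    def ratchet(chain_key):
        # Derive a one-time message key plus the next chain key; the
        # caller then discards the old chain key - that deletion is
        # what provides forward secrecy.
        message_key = hmac.new(chain_key, b"msg", hashlib.sha256).digest()
        next_chain = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
        return message_key, next_chain

    chain = hashlib.sha256(b"secret from the initial key exchange").digest()
    for i in range(3):
        msg_key, chain = ratchet(chain)  # old chain value is dropped
        print(i, msg_key.hex()[:16])

    # Stealing "chain" now reveals nothing about earlier message keys
    # (that would require inverting HMAC-SHA256). But if old content is
    # re-sent in new messages under new keys, as described above, that
    # deletion no longer protects it.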

    This seems to align with what you said in your first response, that Signal’s goal is to “limit privacy leaks,” which I agree with. I’m not sure why we’ve gotten so hung up on semantics.

    I wasn’t going to address this, but since you brought it up twice, running a forum is not much of a credential. Anyone can start a forum. There are forums for vaxxers and forums for antivaxxers, forums for atheists and forums for believers, forums for vegans and forums for carnivores. Not everyone running these forums is an expert, and necessarily, not all of them are “right.” This isn’t to say you don’t have any knowledge of the subject matter, only that running a forum isn’t proof you do.

    If you’d like to reply, you may have the last word.


  • This is not entirely correct. Messages are stored on their servers temporarily (last I saw, for up to 30 days), so that even if your device is offline for a while, you still get all your messages.

    In theory, you could have messages waiting in your queue for device A when you add device B, but device B will still not get those messages, even though the encrypted copies are still on their servers.

    This is because messages are encrypted per device, rather than per user. So if you have a friend who uses a phone and computer, and you also use a phone and computer, the client sending the message encrypts it three times, and sends each encrypted copy to the server. Each client then pulls its copy, and decrypts it. If a device does not exist when the message is encrypted and sent, it is never encrypted for that device, so that new device cannot pull the message down and decrypt it.
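
    As a toy sketch of that fan-out (XOR as a stand-in for a real per-device session; the device names are made up):

    import secrets

    def encrypt_for(device_key, plaintext):
        # Toy XOR "cipher" standing in for a real per-device session.
        return bytes(p ^ k for p, k in zip(plaintext, device_key))

    decrypt_for = encrypt_for  # XOR is its own inverse

    # The sender holds a session key per *device*, not per user.
    devices = {
        "friend-phone":    secrets.token_bytes(32),
        "friend-computer": secrets.token_bytes(32),
        "my-computer":     secrets.token_bytes(32),
    }

    msg = b"hello".ljust(32)
    # One ciphertext per existing device - three copies of this message.
    queue = {name: encrypt_for(key, msg) for name, key in devices.items()}

    # A device registered *after* the send has no copy in the queue, and
    # no key that decrypts anyone else's copy:
    devices["my-new-tablet"] = secrets.token_bytes(32)
    print("my-new-tablet" in queue)  # False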

    For more details: https://signal.org/docs/specifications/sesame/


  • “Desktop publishing” is the category of software you want. I’ve not used it, but I believe Scribus is the standard FOSS tool for this. If you want a simple graphical way to make your album, this is the way.

    Many people have mentioned LaTeX - I would not recommend it for this purpose. LaTeX, while powerful, has a steep learning curve, and isn’t really made for artistic tasks - its purpose is writing technical papers. From literally the first two sentences on the project site:

    LaTeX is a high-quality typesetting system; it includes features designed for the production of technical and scientific documentation. LaTeX is the de facto standard for the communication and publication of scientific documents.

    It’s probably possible to make a beautiful photo album with LaTeX, but without a lot of work, it’s more likely to come out looking like a calculator manual.


    I noted in another comment that SearXNG can’t do anything about trackers that your browser can’t already do, and solving this at the browser level is a much better solution, because it protects you everywhere, rather than just on the search engine.

    Routing over Tor is similar. Yes, you can route the search from your SearXNG instance to Google (or whatever upstream engine) over Tor, and hide your identity from Google. But then you click a link, and your IP connects to the IP of whatever site the results link to, and your ISP sees that. Knowing where you land can tell your ISP a lot about what you searched for. And the site you connected to knows your IP, so they get even more information - they know every action you took on the site, and everything you viewed. If you want to protect all of that, you should just use Tor on your computer, and protect every connection.

    This is the same argument for using Signal vs. WhatsApp - yes, in WhatsApp the conversation may be E2E encrypted, but the metadata about who you’re chatting with, for how long, etc. is all still very valuable to Meta.

    To reiterate/clarify what I’ve said elsewhere, I’m not making the case that people shouldn’t use SearXNG at all, only that their privacy claims are overstated, and if your goal is privacy, all the levels of security you would apply to SearXNG should be applied at your device level: Use a browser/extension to block trackers, use Tor to protect all your traffic, etc.


  • They are explicitly trying to move away from Google, and are looking for a new option because their current solution is forcing them to turn off ad-blocking. Sounds to me like they are looking for a private option. Plus, given the forum in which we are having the discussion (Lemmy), even if OP is not specifically concerned with privacy, it seems likely other users are.

    As for cookies, SearXNG can’t do any more than your browser (possibly with extensions) can do, and relying on your browser here is a much better solution, because it protects you on all sites, rather than just on your chosen search engine.

    “Trash mountain” results is a whole separate issue - you can certainly tune the results to your liking. But literally the second sentence of their GitHub headline is touting no tracking or profiling, so it seems worth bringing attention to the limitations, and that’s all I’m trying to do here.


  • It looks like a few people are recommending this, so just a quick note in case people are unaware:

    If you want to avoid being tracked, this is not a good solution. SearXNG is a meta search engine, meaning it is effectively a proxy: you search on SearXNG, it searches multiple engines and sends all the results back to you. If you use a public instance, you may be protected from the actual search engine*, because many people will use the same instance, and your queries will be mixed in with all of theirs. If you self-host, however, all the searches will be your own - there is then no difference between using SearXNG and just going to the site yourself.

    *The caveat with using the public instances is that while you may be protected from the upstream engine, you have to trust the admins - nothing stops them from tracking you themselves (or passing your data on).

    Despite the claims in their docs, I would not consider this a privacy tool. If you are just looking for a good search engine, this may work, and it gives you flexibility and power to tune it yourself. But it’s probably not going to do anything good for your privacy, above and beyond what you can get from other meta search engines like Startpage and DuckDuckGo, or other “private” search engines like Brave.
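
    To make the “effectively a proxy” point above concrete, the core loop of a meta search engine amounts to something like this (a hypothetical sketch, not SearXNG’s actual code; the upstream URLs are just illustrative):

    import urllib.parse
    import urllib.request

    def metasearch(query):
        # The *instance* makes the upstream requests, so the upstream
        # engines see the instance's IP, not the end user's.
        upstreams = [
            "https://html.duckduckgo.com/html/?q=",
            "https://lite.duckduckgo.com/lite/?q=",
        ]
        results = []
        for base in upstreams:
            req = urllib.request.Request(
                base + urllib.parse.quote(query),
                headers={"User-Agent": "metasearch-sketch/0.1"},
            )
            with urllib.request.urlopen(req) as resp:
                results.append(resp.read())  # a real engine parses/merges/ranks
        return results

    # On a single-user, self-hosted instance, every query leaving the box
    # is yours, so the upstream learns as much as if you searched directly.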


  • it has its flaws.

    Yep yep. I was aware of some of what you pointed out - I think this might be a “perfect is the enemy of good” scenario, though. GitHub alone accounts for over 84% of the listed projects (based on the awesome-selfhosted-data repo):

    $ grep -r 'source_code_url' | cut -d ' ' -f 2 | cut -d '/' -f 3 | sort | uniq -c | sort -rn | head -n 15
       1068 github.com
         36 gitlab.com
          7 git.mills.io
          6 sourceforge.net
          6 framagit.org
          4 www.atlassian.com
          4 codeberg.org
          3 git.drupalcode.org
          3 git.cloudron.io
          2 repos.goffi.org
          2 git.tt-rss.org
          2 git.sr.ht
          2 cvsweb.openbsd.org
          1 yetishare.com
          1 www.wiz.cn
    
    $ python -c "print($(grep -r 'source_code_url' . | grep github.com | wc -l) / $(ls -1 | wc -l))"
    0.8422712933753943
    

    Adding in GitLab gets you to 87%:

    $ python -c "print($(grep -r 'source_code_url' . | grep -i -e github.com -e gitlab.com | wc -l) / $(ls -1 | wc -l))"
    0.8706624605678234

    Also popularity != quality.

    True, but a thriving community generally means more resources, guides, etc., which can be important, especially for self-hosted solutions.

    In any case, the project is great, and much appreciated. Additionally, the enriched HTML version looks fantastic, and exposes most of the metadata* I’d want to see, regardless of how it’s sorted.

    *One other item to track, that I thought about after making my previous comment - number of contributors. It gives an additional data point on the size of the community, as well as an idea of how many people can be hit by buses before the continued development of the project gets called into question.
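
    If anyone wants to script that, one way is to ask the GitHub API for one contributor per page and read the last page number out of the Link header (a sketch; unauthenticated requests are heavily rate-limited):

    import re
    import urllib.request

    def contributor_count(owner, repo):
        # With per_page=1, the "last" page number in the Link header
        # equals the total number of contributors.
        url = (f"https://api.github.com/repos/{owner}/{repo}"
               "/contributors?per_page=1&anon=true")
        with urllib.request.urlopen(url) as resp:
            link = resp.headers.get("Link", "")
        match = re.search(r'[?&]page=(\d+)>; rel="last"', link)
        return int(match.group(1)) if match else 1  # no Link header -> one page

    print(contributor_count("awesome-selfhosted", "awesome-selfhosted"))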