• 0 Posts
  • 19 Comments
Joined 10 months ago
Cake day: December 27th, 2023

  • you’re welcome.

    what i’d suggest… a general rule i like to always follow is to use a test system for everything new. but that does not need to be a full separate system every time.

    let’s say you have your mailbox and want to try fetching new mails from it using fetchmail. first, you could use the uidl mechanism to fetch every mail only once and otherwise leave them all on the server, but i like it a bit more secure: create a second email address/account at your mail provider’s service only for testing. that way you can test the mechanisms however you like without even touching your real inbox (maybe even fill it up with large emails and watch how the system reacts; i once had an email account with a cheap provider that deadlocked inboxes when full…). then, when everything is as you want it, switch the account and password (or create another config file for fetchmail) and you’re done. every change (not only fetchmail things) can be tested this way before going live. filtering could be done with procmail, for example, but if the mda called by procmail somehow exits with success even though the email really wasn’t delivered, the email might be lost forever, depending on the settings of course. so fiddling with new stuff always carries the risk of not fiddling correctly ;-)
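
    to make that concrete, a minimal ~/.fetchmailrc sketch for such a test account (host, account name and password are made up; a starting point, not a tested config):

    ```
    # ~/.fetchmailrc — test account only; mails stay on the server
    set daemon 300                    # poll every 5 minutes
    poll mail.example.com protocol pop3
        user "testaccount" password "s3cret"
        keep                          # never delete mails from the server
        uidl                          # remember which mails were already fetched
        ssl
        mda "/usr/bin/procmail -d %T" # hand over to procmail for filtering/delivery
    ```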

    have fun !


  • it’s possible to tell your mta (like postfix) to use another mta for all mail, or only for some domains etc., so using a third party as the internet-facing service, fetching the mails with fetchmail and storing them in a dovecot server is easy. on the sending side you can use your standard email client (i.e. thunderbird on the pc or k9-mail on the smartphone) to send through your postfix instance that also sits on the server hosting your dovecot service. the mta there takes the mail and delivers it by rules, which could be as simple as relaying everything through your freemailer’s mta using the username/password of your account. i am doing this, but the “external” mail system consists of my servers as well; i just don’t want emails to stay too long on VMs in a datacenter where i have no access to the physical disks in case something goes wrong.
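
    the relevant postfix bits for such a relay setup could look roughly like this (host and credentials are placeholders, not a tested config):

    ```
    # /etc/postfix/main.cf — relay all outgoing mail through the freemailer's mta
    relayhost = [smtp.freemailer.example]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt

    # /etc/postfix/sasl_passwd (then run `postmap /etc/postfix/sasl_passwd`):
    # [smtp.freemailer.example]:587    username:password
    ```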

    a raspberry pi is sufficient for such a setup (i am using a pi4 currently, but for emails only i’d say a 3 or older would do too). adding a disk via usb makes storage huge and cheap; i use two usb SSDs in a raid1 for storage… that server could be reachable only through vpn if you wish, depending on your skills and needs (i mainly use ssl client certificates, which are supported by k9-mail and thunderbird, so it fits seamlessly to be connected through a haproxy that authenticates those certificates before proxying the plain connection to the pi). clients like thunderbird can offline-store all emails (configure download-or-not per imap folder), making searches easy and quick, while my k9 client can search locally or on the server if needed.
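
    the haproxy part with client certificates could be sketched like this (names, paths and addresses are hypothetical; haproxy terminates tls here and forwards the plain connection):

    ```
    # haproxy.cfg — require a client certificate before proxying imap to the pi
    frontend imaps_in
        bind :993 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/clients-ca.pem verify required
        mode tcp
        default_backend pi_imap

    backend pi_imap
        mode tcp
        server pi 192.168.1.10:143
    ```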

    maybe adjust the maximum mail size of your own mta to exactly match (or sit slightly below) that of the freemailer you use, to prevent surprises with big emails that later turn out to be unsent.
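
    in postfix that is the message_size_limit parameter; the value below is just an example for a 25 MiB limit:

    ```
    # /etc/postfix/main.cf — stay at or slightly below the freemailer's limit
    message_size_limit = 26214400
    ```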

    it’s possible to have a nextcloud instance on that same pi that acts as a webmailer, just in case (i really don’t need it, but i’ve set it up anyway). nextcloud is also great for syncing/backing up files, pictures, contacts, notes, todo lists and the calendar of your phone (i use davx5, opentasks and foldersync for that). there are other webmailers available, but installing/using nextcloud is not a bad idea either ;-)

    i suggest also setting up some automatic offsite backup with snapshots of that pi, to cover the emails as well as the setup and its configs ;-)


  • First contact was in the eta-carinae system named here; we did a holiday tour there long ago and heard about earth from a scientist who had rescued a human instead of just studying him, and thus could not leave him there with his memories about him. the human was talking about star trek, its similarities and real differences, all the time. he already spoke fluent standard Sjesh/sound without any interfaces, so we listened directly to his true mind. he even had a very worn-out tng t-shirt in his box of personal memory items. i mean, he had really used his memory items before! that made us curious, and the rest is history. however, he is now back here: we managed to arrange behavioral training for him to hide his experiences well, he passed all the tests and got his transport back, but with his biological cell clock reset to his 20th year, to compensate a little for the decades he lost out there. it is possible he could become an ambassador for earth one day, but it looks unlikely that he would want that given the circumstances here, a task he always compares to the mythological boulder of Sisyphus (which never physically existed) whenever he is asked about this opportunity.

    just kidding. first contact with TNG was at school, where other kids talked about the first episode. i could not watch it at home, and i also had other problems to fix at that time, so i missed a lot of the start of it :-/

    however, i am trying to train myself in writing in general, as i have ideas for a longer story (though not within the trek universe), and as the above text came to my mind i just wrote it down. i hope you don’t find it too misplaced here or badly written… any feedback is welcome.


  • we need an adblockers blockers blocker

    no, what is needed is an app that helps track who benefits from the apps that annoy you most:

    • ownership of companies pushing annoying ads
    • management of companies pushing annoying ads
    • find the connection between those and the products you might want to buy in shops or on the internet before you buy; then, instead of buying, let the app send the seller a message that you did not buy because of that connection.
    • do this in numbers with lots of people and see what happens to the advertising jungle

    the point is NOT buying because of advertising AND letting them know it, so they can learn to improve themselves.

    they wanted your data? let them have it the way you want them to.

    same with any platform. ask the creator of your choice to also publish using patreon, and you’ll become a member there, getting the content free of ads. better to directly pay those who do the actual work than all the big tech harvesting the benefit in between.

    so what may be needed here is a free or even self-hostable platform that also allows paid subscriptions.


  • smb@lemmy.ml to Mildly Infuriating@lemmy.world · What fresh fuckery is this? 🤦
    3 months ago

    really, yt stopped playing sound on the website for me (being logged in); there is a banner to “activate sound”, but it always disappears too fast to click. so i searched and found webtube, an app that basically loads their website but has one feature youtube has not: “sound” *lol

    now i wonder how many of these apps really are “third party” apps, and not actually theirs, only masked as third party to get that gain of trust all the “others” get, when it comes to big techs with their very own “public” crime records …

    it would be too easy for them to create some small apps, act as if those were 3rd-party software, and harvest that spy-oil (of the 21st century) anyway.


  • i once had to look at a firewall appliance cluster (and discovered it could not do any failover in its current state, but somehow the decision-maker was ok with that). when looking at its logs, i discovered rsh and rcp access from an ip address that belonged to a military organisation on a different continent. i had to make it a security incident. later the vendor said this was only the cluster-internal routing (over the dedicated crosslink) used for synchronisation (the thing that did not work), that it lived in a separate routing table used only for cluster sync, and that it could never be used for real traffic. but why not simply use an ip that you “own” yourself and give it a PTR record hinting at what the ip is used for, instead of customers scratching their heads over why the military still uses rcp and rsh? i guess because no company reads firewall logs anyway XD

    someone else’s ip? yes! because they’ll never find out !!1!

    i really appreciate that ipv6 has things like a dedicated documentation address range (2001:db8::/32) and that fc00::/7 is nicely short.


  • ipv6 in companies… ipv6 is not hard, but for internal networking no company (really) “needs” more than rfc1918 address space. thus any move in that direction is always deemed “less needed” than the next bonus for (da)magement personnel, which of course is crucial for the whole company’s survival…

    for a company’s services to be reachable from outside via ipv6, mostly “only” the loadbalancers/revproxies etc. need to be ipv6-ready, but… this also produces logs that can break decades-old regexes no one understands any more (as the good engineers left due to too many bonuses paid to damagement personnel). other access/deny rules could break too, or worse, let traffic through where they should block: remember that “192.168.” could match part of an ipv6 address IF some genius used a matching mechanism that treats the dot “.” as a wildcard because overpaid damagement personnel made them rush too fast, and such rules could be hidden “somewhere”. altogether, technical debt is a huge blocker for everything, especially company growth, and if no customer “demands” ipv6, it stays on the damagement personnel’s list as “fulfilling the wishes of engineers to keep them happy” instead of on the always-deleted “cleaning up technical debt caused by damagement personnel” list.

    setting up firewalls for ipv6 is quite easy, and if you follow the fine-grained “whitelist or drop/block” approach from the beginning, it might take a while for the ipv6 specials to become known to you. but the much bigger thing is IMHO the then-current state of the firewall rules: who knows every existing rule? which rules should have been removed already and must not be ported to ipv6? usually firewalls and their rules are a big mess due to… again, too many bonuses paid to damagement personnel, hindering the company from taking the needed steps forward…

    ipv6 adoption is slow for reasons that are driving huge cars that in turn speed up other problems ;-|


  • maybe start with an adjustable setup:

    • rent a cheap vm; i currently have one from ovh for 1€/month (for the first year, cancel monthly)
    • set up 3 openvpn instances that redirect all routes through the tunnel: one ipv4-only, one ipv6-only and one with both
    • set up the client on your mobile phone and your laptop, each with all three vpns to choose from
    • now you can choose and try out ipv6, standalone or dualstack, depending on which vpn you switch on
    • use this setup to blame services that don’t support ipv6 yet, or that are maybe broken with dualstack 🤣
    • rise from under-the-stone (ipv6 disabled only) to in-sunlight (a well-above-industry-standard-level!!! “quick” adopter of new network technologies, “genius”) 🤣
    • improve your openvpn setup from above to be reachable “via” ipv6 too, if you haven’t done that from the beginning; done: you’ve reached the pro level of the late-adopter noob group (see the config sketch below)

    (if you want, ask for config snippets)
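
    in the meantime, a rough sketch of what the dual-stack instance could look like (addresses, file names and tunnel pools are placeholders, not a tested config):

    ```
    # server-dual.conf — dual-stack openvpn instance redirecting all client traffic
    port 1194
    proto udp6                        # on linux this usually accepts ipv4 clients too
    dev tun
    ca ca.crt
    cert server.crt
    key server.key
    dh dh.pem
    server 10.8.0.0 255.255.255.0     # ipv4 pool for clients
    server-ipv6 fd00:abcd:1234::/64   # ula ipv6 pool for clients
    push "redirect-gateway def1 ipv6" # route all client ipv4+ipv6 through the tunnel
    # v4-only instance: drop the ipv6 lines; v6-only: drop the v4 pool and
    # push "redirect-gateway ipv6 !ipv4" instead
    ```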

    btw i prefer to wait for ipv8😁 before “demanding” ipv6 from services i use 🤣


  • smb@lemmy.ml to Programmer Humor@programming.dev · "prompt engineering"
    7 months ago

    that a moderately clever human can talk them into doing pretty much anything.

    besides that, LLMs are good enough to let moderately clever humans believe they actually got an answer, when it was really just guessing and probabilities based on millions of troll messages, advertising lies, fantasy books, scammer webpages, fake news, astroturfing, propaganda of the past centuries including the currently made-up narratives, and a quite long prompt invisible to that human.

    cheerio!




  • after looking at the ticket myself, the relevant points IMHO are:

    • a person filed a bug report because they did not see which change in the new version caused the different behaviour
    • that person seemed pushy: first telling the dev where patches should be sent (is this normal? i guess not; better let the dev decide where patches go or, in this case, whether patches are needed at all), then coming up with ceo-style wording (highly visible, customer experience of an untested but nevertheless released-to-live product is bad due to this, implicitly “your”, bug)
    • the pushiness is countered by a “please help”
    • free-of-charge consulting was given by the one pointing out that the changes were likely visible in the changelog (i did not look, though); they nevertheless pointed to the parameter, which assumes RTFM (if the docs were indeed updated): a default value had changed, and its behaviour could be adjusted with that parameter.

    up to there, that person, belonging to M$ or not (don’t know and don’t care), behaved IMHO rather correctly: submitting a bug report for something that looked like a bug, being a bit pushy, wanting priority, trying to command, but still at least formally “asking” for help. at that point, though, the “bug” seemed resolved to me: it looks like the person either did not read the manual and changelog, or maybe the manual or changelog lacks that information, but the latter was not claimed, so i guess that person simply read neither the changelog nor the manual.

    instead, so it seems to me, that person demanded immediate and free-of-charge consulting on how exactly the switch should be used in their specific use case, which would imply the dev looks into the example files and maybe does trial and error himself, just so that that person neither needs to invest the time to learn the software the company depends on, nor to hire a consultant to do that work.

    i think (intentional or not) abusing a bug tracker to demand free-of-charge end-user consulting from a dev is a bad idea, unless one wants(!) to actively waste the precious time of the dev (on whom that high-priority ticket for the highly visible, already-released product relies), or has even worse intentions like:

    • uploading example files with exploits in them, pointing to the exact versions that include the RCE vulnerability the sample file would abuse; the “bug” was only reported because it fits the version needed for exploitation, and pressure was made by naming big companies, to maybe make the dev run a vulnerable version on his workstation before someone finds out, so that an upstream attack could take place directly on the dev’s workstation. but that’s just me constructing a fictional worst-case scenario.

    to me this clearly looks like a “different culture” problem. in companies where everyone is paid by basically the same employer, abusing an internal bug tracker for quick internal consulting would probably be seen as normal and even best practice: the dev who knows and actually works on the code likely has the solution right at hand without thinking much, while the other person, who is in charge of quick-fixing an untested but already-live product, has neither sufficient knowledge of how the thing works nor the time to learn it or at least read changelogs and the manual, let alone the time to learn the basics of general upstream software culture.

    in companies, the https://en.m.wikipedia.org/wiki/Peter_principle could be a problem that imho likely leads to such situations, but this is a guess, as i know nobody working there, and i am not even convinced that that person in fact works for the named company; the name showing up in the ticket is, to me, a reason not to rely too much on names in ticket systems always being real names.

    the behaviour that causes the bad postings here in this lemmy thread is to me likely “just” a culture problem, and that person would be well advised to learn open-source culture, netiquette etc., and to behave differently depending on whom they communicate with, where and how; what to expect, and how to interact productively to the benefit of their upstream too, which is so often the “real price” in open source. it could be that in the company that rolled out the untested product it is considered best practice to immediately grab the dev who knows the software and have him help you with whatever you can’t do on your own (for whatever reason), whenever you manage to encounter one =]

    i assume the pushiness likely comes from their hierarchy. it is not uncommon that so-called leaders just push pressure downwards because they have no clue about the thing and don’t want to gain that clue; i cannot know that, it’s just a picture in my head. but with a company that seems to put pressure on releasing an untested product to customers, i guess i am not too wrong with the direction of that assumption. what the company maybe should learn is that releasing untested and/or unfinished products to live is a bad habit. but i also assume that if they wanted to learn that, they would have started learning it roughly two decades ago. again, i do not know which company that person works, or worked, for; it could just be a subcontractor of the named one. and it could also be that the pushiness (telling it’s for m$, that it’s live, has customer impact etc.) was really decided by someone up the ladder with literally no experience in handling upstream in such situations. hierarchies can be very dysfunctional sometimes, and in companies, saying “impact to customers” is often the same as saying “boss says asap”.

    what i would suggest their customers (those who were given a beta version as production-ready) should learn: when someone (maybe) continuously delivers differently than advertised, then after experiencing this a few times, the customer would be insane to assume that this bad behaviour will vanish through pure hope plus throwing money into hands where money apparently hasn’t improved habits for, presumably, decades. and when feeding the ever-hungry with money does not solve the problems, looking towards those who have a grown-up culture that does not depend on money could actually provide more usable products. evaluating new solutions (e.g. which one would really be best for a specific use case) or testing new versions before really rolling them out to live might be costly, especially when done thoroughly, but it provides a lot of really valuable stability, unreachable for those who only throw money at the shareholders of brands and rely on pure hope for all the rest. especially when that brand maybe even officially announced the removal of its testing department ;+) what should a sane and educated customer expect then? but again, to note: i do not know which companies are really involved and how exactly. from the ticket i do not see which company that person directly works for, nor whether the claim that m$ is involved is a fact or just a false claim in the hope of quicker help (companies already too desperate to test products before going live may become desperate again and need even more help when their bad habits have piled up too long and begin falling on their heads).


  • the xz attack came in through a superfluous dependency on systemd: xz was only the library abused to exploit systemd’s superfluous dependency hell. sshd itself does not use xz, but libsystemd depends on liblzma (the xz library), and some distributions patch sshd to link against libsystemd. sshd does not need systemd, but it was attacked through that library dependency.

    we should remove any pointless dependencies that can be found on a system, to reduce dependency-based attack vectors to a minimum and prevent such attacks in the future.
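
    for example, a quick way to check whether the sshd on a given distro pulls those libraries in (the binary path may differ):

    ```
    # does sshd link (via libsystemd) against liblzma?
    ldd /usr/sbin/sshd | grep -Ei 'systemd|lzma'
    # a distro-patched sshd may list libsystemd.so.0 and liblzma.so.5;
    # a vanilla upstream openssh build shows neither
    ```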

    also, we should increase the overall level of privilege separation; systemd is a good bad example here, just look at the init binary and its capability zoo.

    The company that hired “the” systemd developer should IMHO start to really fix these issues!

    so please hold your “$they have fixed it” back until the root cause that made the xz dependency-level attack possible in the first place has really been fixed =)

    Of course, pointing it out was good, but now the root cause should be fixed, not just a random symptom that happened to be the first visible attack using this attack vector introduced by systemd.



  • i am happy with a raspberry pi setup connected to a VLAN switch; the internet is behind a modem (bridged mode) connected via ethernet to one switchport, while the raspi routes everything through one tagged physical GB switchport. the setup works fine with two raspis and failover, without tcp disconnections during an actual failover, only a few seconds’ delay when that happens. so voip calls basically recover after seconds and streaming is not affected, while in a game a second off might already be too much. however, as such hardware failures happen rarely, i am running only one of them anyway.

    for the firewall i am using shorewall, while for some special routing i also use the unbound dns resolver (one can easily configure static results for any record) and haproxy with sni inspection for specific https routing in the rather specialized setup i have.
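
    such a static unbound record is just a few lines (zone and address are made-up examples):

    ```
    # /etc/unbound/unbound.conf — pin a fixed answer for one name
    server:
        local-zone: "lab.home." static
        local-data: "nas.lab.home. IN A 192.168.10.5"
    ```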

    my wifi is done by an openwrt device, but i only use it for having separate wifis bridged to their own vlans.

    thus this setup allows for multi-zone networks at home, like a wifi for visitors with daily changing passwords and another for chromecast or home automation, each with its own rules, hardware redundancy and special tweaking. everything that runs on gnu/linux is possible, including pihole, wireguard, ddns solutions, traffic statistics, traffic shaping/QOS, traffic dumps, or even SSL interception if you really want to import your own CA into your phone and see what data your phone’s apps (those that don’t use certificate pinning) transfer when calling home, and much more.

    however, regarding ddns, it sometimes feels safer and more reliable to have a reserved IP that does not change. some providers offer rather cheap tunnels for this purpose. i once had a free (ipv6) tunnel at Hurricane Electric (besides another one for IPv4), but now i use VMs in data centers.

    i do not see any ready-made product being that flexible. to me the best ready-made router system is openwrt: you are not bound to a hardware vendor, you get security updates longer than with any commercial product, you can copy your config 1:1 to a new device even if the hardware changes, and you have the possibility to add packages with special features.

    “openwrt” is IMHO the most flexible ready-made solution for long-term use. “pfsense” is also very much worth looking at, having some similarities to openwrt while being different.


  • my 2 cents just in case…:

    A raid6 is not a replacement for backup ;-) i use rdiff-backup, which is easy to use, stores only one full backup, and keeps all increments going into the past; it is only possible to delete the oldest increments (afaik no “merging”). i never needed anything else. one backup should be off-site, and another one offline, synced manually once in a while. make complete dumps (including triggers etc.) from databases before doing the backup ;-)
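
    the classic rdiff-backup invocations for that pattern look roughly like this (host and paths hypothetical; newer 2.x versions also offer a `rdiff-backup backup …` subcommand style):

    ```
    # mirror /home to the backup host; increments accumulate automatically
    rdiff-backup /home backuphost::/backups/home

    # later: drop only the oldest increments, e.g. everything older than one year
    rdiff-backup --remove-older-than 1Y backuphost::/backups/home
    ```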

    i like to have a recreatable server setup: set it up manually, then put everything i did into ansible, try to recreate a “spare” server using ansible and the backup, test everything, and you can be sure you have also “documented” your setup to a good degree.

    for hardware i do not have many assumptions about performance (until it hits me), but an always-running in-house server had better save power (i learned this the costly way). it is possible to turn cpus off and run on only one cpu at a reduced frequency in times without performance needs; that can help a bit, and at least it feels good to do so, while turning cpus back on and setting a higher frequency is quick and can easily be scripted.
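
    on linux this can be scripted via sysfs, roughly like this (cpu numbers and governors are examples; the available governors depend on kernel and driver):

    ```
    # take cpu3 offline (cpu0 usually cannot be offlined)
    echo 0 | sudo tee /sys/devices/system/cpu/cpu3/online

    # reduce frequency on the remaining cpus via the powersave governor
    echo powersave | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

    # later: bring cpu3 back and allow full speed again
    echo 1 | sudo tee /sys/devices/system/cpu/cpu3/online
    echo ondemand | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    ```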

    hard drives: make sure you buy 24/7-rated ones; they are usually far more hassle-free than the consumer grades and likely cost “only” double the price. i would always place the system on SSDs, but always as raid1 (not raid6), while the “other” disk could then maybe be a magnetic one set to write-mostly.
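
    with linux software raid, that ssd+hdd raid1 idea could look like this (device names made up; double-check before pointing mdadm at real disks):

    ```
    # raid1 of an ssd and a magnetic disk; reads then prefer the ssd
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/sda1 --write-mostly /dev/sdb1
    ```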

    as i do not buy “server” hardware for my home server, i always buy the components twice when i change something, so that i have spare parts ready at hand when i need them. running a server for 5+ years often ends with not being able to buy the same parts again, and then you first have to research what you want, order, test, maybe send it back because it does not fit… unstable memory? mainboard sending smoke signs? with spare parts at hand, a matter of minutes! the only thing i am missing with my consumer-grade home server hardware is ecc ram :-/

    for cooling i like to use a 12cm fan powered with only 5v (instead of the 12v it wants), so that it runs smoothly slow and is nearly as silent as passive-only cooling, but heat does not build up in the summer. do not forget to clean off the dust once in a while… i never had a 5v-powered 12v 12cm fan with bearing problems, and i think one of them ran for over a decade. i think the 12-volt fans last longer on 5v, but no warranty from me ;-)

    even with a headless setup i like to have a quick way to get to a console in case the network is not working. i once used a serial cable and my notebook, then a small monitor/keyboard; now i use pikvm and can look at my servers’ physical console from my mobile phone (though i need an ssl client certificate and TOTP to do so). but yes, this involves network, i know XD

    you likely want SMART monitoring for the disks, and to run memtest once in a while.

    for servers i also like to have some monitoring that can somehow push a message to my phone for foreseeable conditions i would like to handle manually.

    debsums, logcheck, logwatch and fail2ban are also worth looking at, depending on what you want.

    also, after updating packages, have a look at lsof | egrep “DEL|deleted” to see which programs need a simple restart to really use the updated libraries. that way, reboots are only needed for newer kernels.
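
    a slightly more targeted variant of that check (just a sketch) lists the unique process names still mapping deleted libraries:

    ```
    # unique processes still using old (deleted) library files after an upgrade
    sudo lsof +c0 2>/dev/null | grep -E 'DEL|deleted' | awk '{print $1}' | sort -u
    # then restart the matching services, e.g.: systemctl restart postfix dovecot
    ```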

    ok this is more than 2 cents, maybe 5. never mind

    hope these ideas help a bit




  • see, capitalism works!

    1. sell 10 million packages, each missing 2% of the contents.
    2. sell those 200,000 extra packages with the contents you “saved” (no, not 204,000 with again 2% missing, see below why)
    3. do not pay taxes on the extra packages you sold, as you can “prove” you sold all 10 million and paid taxes on those.
    4. receive 200,000 times the price of a package as personal tax-free extra income.
    5. write the one guy who complained about 8 grams of missing pasta a sorry letter
    6. complain about the time lost and the costs of writing a single sorry letter, and pay for paper and stamp out of the “marketing” campaign budget
    7. complain about the world not trusting companies
    8. complain about people using badly adjusted scales
    9. complain about someone selling non-genuine products on the market with your logo faked.
    10. assume that those packages with missing contents could just be those fake products.

    done a full circle.

    but… kitchen scales are really bad, and most other scales as well. i tried to find (electronic) scales that are actually precise:

    for low weights i ended up with a scale with 0.01 gram precision, but it can only measure a bit more than 100 grams (it also included a 100 g calibration weight)

    for higher weights i only found a scale made for post offices to weigh packages. the only thing the vendor “really” promised was that measuring the same thing multiple times would show the same weight (nope, the best “affordable” scale on the market here did not promise to measure correctly, just to repeat the same result over and over: repeatability, not accuracy…)

    i guess the options for accurately measuring more than 100 g are:

    • old-style mechanical scales, adjusted daily
    • high priced industry/laboratory scales with warranties

    fun fact:

    after i bought that 0.01 g precision scale, amazon showed me small plastic clip bags with green leaf signs on them as “recommended” products for months, while i used the scale only to mix small amounts of 2-component epoxy resin for projects.