
  • You cannot run Pterodactyl on Cloudflare.

    For Pterodactyl or any game server, you need a real server with a real operating system that you have full control over.

    Instead, look for cheap VPS options in your country. Make sure the specs are good enough for a Minecraft server; it will cost roughly $5-10 per month. Then:

    • Point the domain to that VPS by changing the DNS records in Cloudflare.
    • Install a Linux distribution on it, such as Debian.
    • Log into that Linux system using SSH.
    • Then follow the installation instructions on the Pterodactyl website: https://pterodactyl.io/panel/1.0/getting_started.html
  • Sounds like you are in over your head.

    Pterodactyl has a Discord; why don't you go there for dedicated support?

    Regardless of where you ask: if you want help, you should provide detailed information. State exactly what commands you entered, from start to finish, skipping nothing, and provide the output you got, especially the errors.

  • Good input, thank you.


    "As far as I know, none of them had random false data so I'm not sure why you would think that?"

    You can use topic B as an illustration for topic A, even if topic B does not directly contain topic A. For example, during a chess game analysis: "Moving the knight in front of the bishop is like a punch in the face from Mike Tyson."


    There are probably better examples of more complex algorithms that work on data collected online for various goals. When developing those, a problem that naturally comes up would be filtering out garbage. Do you think it is absolutely infeasible to implement one that would detect AdNauseam specifically?
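
    For illustration, a first cut at such a filter might look like the toy sketch below (my own assumption, not any ad network's actual algorithm, and the thresholds are made up): a human clicks a small, topically clustered fraction of the ads they see, while an AdNauseam-style extension clicks essentially all of them.

    ```typescript
    // Toy AdNauseam detector (hypothetical sketch, not any ad network's
    // actual filter). Thresholds are arbitrary illustration values.
    interface AdEvent {
      category: string; // e.g. "shoes", "crypto", "gardening"
      clicked: boolean;
    }

    function looksLikeAdNauseam(events: AdEvent[]): boolean {
      if (events.length === 0) return false;
      const clicks = events.filter((e) => e.clicked);
      const clickRate = clicks.length / events.length;
      const clickedCategories = new Set(clicks.map((e) => e.category));

      // Humans click few ads and cluster around their interests; clicking
      // nearly everything across unrelated categories points to automation.
      return clickRate > 0.9 && clickedCategories.size >= 10;
    }
    ```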

  • https://github.com/abrahamjuliot/creepjs

    This illustrates lots of techniques and how to implement them.

    The most interesting to me is "lie" detection. If your browser attempts to give some false data, like when using the Chameleon extension, there are ways to verify a lot of it with JavaScript.

    But check out the README for detailed info, and try it yourself on the webpage to see what it can gather from your setup. https://abrahamjuliot.github.io/creepjs/
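
    For a flavour of what such a check can look like, here is a minimal sketch of my own (not creepjs's actual code): a spoofed user agent often contradicts other APIs that the spoofing extension forgot to patch.

    ```typescript
    // Cross-check the reported user agent against other browser APIs.
    // Hypothetical sketch; tools like creepjs run many more checks.
    function detectUserAgentLies(): string[] {
      const lies: string[] = [];
      const ua = navigator.userAgent;

      // OS claimed in the UA string vs. the legacy platform property
      if (ua.includes("Windows") && !navigator.platform.startsWith("Win")) {
        lies.push(`UA claims Windows, platform says "${navigator.platform}"`);
      }

      // A mobile UA on a device reporting no touch support is suspicious
      if (/Android|iPhone/.test(ua) && navigator.maxTouchPoints === 0) {
        lies.push("UA claims a mobile device, but no touch points reported");
      }

      return lies; // empty array: no contradictions found
    }

    console.log(detectUserAgentLies());
    ```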

    "adnauseun (firefox add on that will click all ads in the background to obscure preference data)"

    is what the top level comment said, so I went off this info. Thanks for explaining.

    "Huh? No one in the Cambridge Analytica scandal was poisoning their data with irrelevant information."

    I didn't mean it like that.

    I meant it in an illustrative manner: the results of their mass tracking and psychological profiling were so dystopian that filtering out random false data seems trivial in comparison. I feel like a bachelor's or master's thesis would be enough to come up with a sufficiently precise method.

    Compared to that, it seems extremely complicated to algorithmically figure out exactly which customized lie you have to tell each individual to manipulate them into behaving a certain way. That probably took a larger team of smart people working together for many years.

    But of course I may be wrong. Cheers

  • You are just moving the problem one step further, but that doesn't change anything (if I am wrong please correct me).

    You say it is ad behaviour + other data points.

    So the picture of me they have is: [other data] + doesn’t click ads like all the other adblocker people (which is accurate)

    Why would I want to change it to: [other data] + clicks ALL the ads like all the other AdNauseam people (which is also accurate)

    How does using AdNauseam or not make a difference? I genuinely don't get it. It's the same [other data] in both cases. Whether you click none of the ads or all of the ads can be detected.


    As a bonus, if AdNauseam clicked just a couple of random ads, they would have a wrong assumption about my ad-clicking behaviour.

    But if I click none of the ads, they have no accurate assumption about my ad-clicking behaviour either.

    Judging by incidents like the Cambridge Analytica scandal, the algorithms that analyze the data are sophisticated enough to differentiate your true interests, which are collected via your other browsing behaviour, from your ad-clicking behaviour when the two contradict each other or when one of them seems random.
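
    As a sketch of that last point (hypothetical, just to illustrate the idea): compare the interest profile inferred from general browsing with the categories of the clicked ads; if they barely overlap, the clicks look like noise rather than preference.

    ```typescript
    // Hypothetical contradiction check between two data sources.
    // Returns the fraction of ad clicks matching known interests;
    // a value near 0 suggests the clicks are random noise.
    function clickInterestOverlap(
      browsingInterests: Set<string>, // e.g. inferred from visited pages
      clickedAdCategories: string[],  // e.g. from the ad network's logs
    ): number {
      if (clickedAdCategories.length === 0) return 1; // nothing to contradict
      const matches = clickedAdCategories.filter((c) => browsingInterests.has(c));
      return matches.length / clickedAdCategories.length;
    }

    // Example: browsing says "chess" and "linux", but the clicked ads are
    // all over the place -> overlap 0.25, i.e. the clicks look random.
    clickInterestOverlap(
      new Set(["chess", "linux"]),
      ["chess", "perfume", "mortgages", "cruises"],
    );
    ```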

    On Linux you can also use vmtouch to force-cache the project files in RAM. This speeds up the first compilation of the day. On repeated compilations, files read from disk are naturally in the RAM cache already, and then it does not matter what drive you have.

    I have used this in the past when I had slow drives. At the start of the day I would force all necessary system libs (my IDE, the JDK, etc.) and my project files into RAM, then take a two-minute coffee break while it read all that stuff from an HDD. That sped up the workflow in general, at least at the start of the day.

    It is not the same as a ramdisk, as the normal Linux file cache writes changes back to the disk in the background.
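
    For illustration, that morning warm-up could be a small script along these lines (a sketch assuming Node.js and that vmtouch is installed; the paths are made-up examples):

    ```typescript
    // Warm the page cache for the day's working set by shelling out to
    // vmtouch. "-t" touches every page of every file under the given
    // path, pulling it into the kernel's RAM cache.
    import { execFileSync } from "node:child_process";

    const pathsToWarm = [
      "/home/me/projects/myapp",   // project sources
      "/usr/lib/jvm/default-java", // JDK and other big dependencies
    ];

    for (const path of pathsToWarm) {
      execFileSync("vmtouch", ["-t", path], { stdio: "inherit" });
    }
    ```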

    You can also pin a specific process to your fastest core (e.g. with taskset on Linux); to truly reserve that core for it, you'd additionally have to keep other tasks off it (e.g. via the isolcpus kernel parameter or cpusets). But that seems like more hassle than it's worth, so I never tried it.

  • But the picture of me they have is: doesn't click ads like all the other adblocker people (which is accurate)

    Why would I want to change it to: clicks ALL the ads like all the other AdNauseam people (which is also accurate)

    France has Mistral. There are also many others in the US and worldwide.

    China is worse when comparing many other factors, but if ALL you care about is whether the model weights are open source, then that is usually decided not by countries but by companies.

    I just tested it on my instance. You can create a public share by setting the mode to "Write"; it is accessible without logging in as a user (with an optional password).

    It works, but you do not see any files, not even the ones you uploaded yourself. So, for example, if you updated a file and need to re-upload it, there is no way for you to delete the previous version.

    You can also create a shared "virtual folder" that is seen by multiple users, and then you have fine-grained control per user (Users > burger menu > Edit > ACLs > Per-directory permissions), where you can mix and match from a list of ~15 permissions. To upload anything to that virtual folder, you'll have to properly log in as a user.

    Hope one of these approaches works for you. Cheers

  • AAAAH sorry, I misunderstood your point before. I thought users should not see files from before they joined that folder, but should see files that come in after they join.

    But you mean users should only see the files they upload themselves, while an admin or the like sees all files.

  • I don't know the details, but from just trying out both, jShelter worked better for me.

    When a website didn't work, I knew to flip jShelter's toggles one by one until it did. With Chameleon, I tried clicking through its 7 tabs and 20 options, but failed to make the websites work. It also wasn't clear what it does with the user agent and such, as creepjs was still able to detect everything.

    So yeah, I recommend trying creepjs with each one and seeing what changes. Then you know which one is better for privacy. And if it's neither, then you know that JS is really fucking creepy, because it can use a lot of tricky ways to figure out stuff about your browser and OS. The only way out is to fully block JS.
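
    One concrete example of those tricky ways (a hypothetical sketch in the spirit of creepjs's lie detection, not its actual code): extensions that spoof navigator properties usually replace the native getters, and a replaced getter no longer stringifies as native code.

    ```typescript
    // Check whether a Navigator getter still looks native. Spoofing
    // extensions often redefine these getters in JavaScript, which
    // removes the "[native code]" marker from their source text.
    function getterLooksSpoofed(prop: string): boolean {
      const desc = Object.getOwnPropertyDescriptor(Navigator.prototype, prop);
      if (!desc || !desc.get) return true; // moved or redefined: suspicious
      return !Function.prototype.toString.call(desc.get).includes("[native code]");
    }

    console.log("userAgent spoofed?", getterLooksSpoofed("userAgent"));
    console.log("platform spoofed?", getterLooksSpoofed("platform"));
    ```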