Posts: 1 · Comments: 16 · Joined: 2 yr. ago

  • Actually, I agree. I guess I was just still annoyed after having just read about how LLMs are somehow not neural networks, and in fact not machine learning at all...

    Btw, you can absolutely fine-tune LLMs on classical regression problems if you have the required data (and care more about prediction quality than statistical guarantees). The resulting regressors are often quite good.

  • I will admit I didn't check, because it was late and the article failed to load. I just remember reading several papers 1-2 years ago on things like cancer-cell segmentation, where the 'classical' UNet architecture was beaten either by pure transformers or by UNets with attention gates added on all skip connections.

  • Those models are almost certainly essentially the same transformer architecture as any of the LLMs use, simply because transformers beat most other architectures in almost any field people have tried them in. An LLM is, after all, just a classifier with an unusually large set of classes (all possible tokens) which gets applied repeatedly.
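    The "classifier applied repeatedly" view can be sketched with a toy greedy decoder. The vocabulary and bigram score table below are invented stand-ins for a transformer's logits, just to show the loop structure:

    ```python
    # Toy illustration: a "language model" as a classifier over a token
    # vocabulary, applied repeatedly to its own output. All scores are made up.
    SCORES = {
        "<s>": {"the": 2.0, "cat": 0.1, "sat": 0.1, "</s>": 0.0},
        "the": {"the": 0.0, "cat": 2.0, "sat": 0.5, "</s>": 0.0},
        "cat": {"the": 0.0, "cat": 0.0, "sat": 2.0, "</s>": 0.5},
        "sat": {"the": 0.1, "cat": 0.0, "sat": 0.0, "</s>": 2.0},
    }

    def generate(max_len=10):
        tokens = ["<s>"]
        while tokens[-1] != "</s>" and len(tokens) < max_len:
            logits = SCORES[tokens[-1]]           # "classify" the next token
            tokens.append(max(logits, key=logits.get))  # greedy pick
        return tokens

    print(generate())  # ['<s>', 'the', 'cat', 'sat', '</s>']
    ```

    A real model conditions on the whole prefix rather than the last token, but the outer loop is exactly this: classify, append, repeat.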

  • Because, while Switzerland is not part of the EU, it follows many of its regulations, maybe even most of them.

    In this particular case, I happen to know that the unofficial rule is indeed to use burner phones for travel into the US in some cases. But you're never supposed to have unencrypted data on your phone or laptop in any case.

  •  R

    > binom.test(11, n = 24, alternative = "two.sided")

            Exact binomial test

    data:  11 and 24
    number of successes = 11, number of trials = 24, p-value = 0.8388
    alternative hypothesis: true probability of success is not equal to 0.5
    95 percent confidence interval:
     0.2555302 0.6717919
    sample estimates:
    probability of success 
                 0.4583333 

    Probably not. Or at least we can't conclude that from the data. ¯\_(ツ)_/¯
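    For reference, the same exact two-sided p-value can be reproduced in Python with only the standard library, by summing the probabilities of all outcomes no more likely than the observed one (the tie-handling R's `binom.test` uses):

    ```python
    # Exact two-sided binomial test, matching binom.test(11, n = 24) above.
    from math import comb

    def binom_test_two_sided(k, n, p=0.5):
        """Sum P(X = i) over all outcomes i no more likely than the observed k."""
        pmf = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
        cutoff = pmf(k) * (1 + 1e-7)  # small tolerance for float comparison
        return sum(pmf(i) for i in range(n + 1) if pmf(i) <= cutoff)

    pval = binom_test_two_sided(11, 24)
    print(round(pval, 4))  # 0.8388, as in the R output
    ```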

  • I have yet to meet a single logician, American or otherwise, who would use the definition without 0.

    That said, it seems to depend on the field. I think I've had this discussion with a friend working in analysis.

  • But the vector space of (all) real functions is a completely different beast from the space of computable functions on finite-precision numbers. If you restrict the equality of these functions to their extension,

    defined as f = g iff forall x ∈ R: f(x) = g(x),

    then that vector space appears to be not only finite-dimensional, but in fact finite. Otherwise you probably get a countably infinite-dimensional vector space indexed by lambda terms (or whatever formalism you prefer). But nothing like the space which contains vectors like

    F_{x_0}(x) := (1 if x = x_0; 0 otherwise)

    where x_0 is uncomputable.
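    The gap can be made precise with one standard step: the indicator functions F_{x_0} for distinct x_0 are linearly independent, since evaluating any vanishing finite combination at one of the points isolates a single coefficient:

    ```latex
    \[
    \sum_{i=1}^{n} a_i F_{x_i} = 0
    \quad\Longrightarrow\quad
    a_j = \sum_{i=1}^{n} a_i F_{x_i}(x_j) = 0
    \qquad \text{for distinct } x_1, \dots, x_n \text{ and each } j.
    \]
    ```

    So the space of all real functions has uncountable dimension, while anything indexed by lambda terms is at most countably infinite-dimensional.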

  • Depends on the kind of blur. Some kinds can indeed be almost perfectly removed if you know the blurring function that was used; others are destructive. But yes, don't take that chance. Always delete or paint over sensitive information.

    Source: we had to do just that in a course I took a long time ago.
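    The invertible case can be sketched with a toy 1-D circular blur: circular convolution is diagonalized by the DFT, so pointwise division of spectra recovers the signal exactly whenever the kernel's spectrum has no zeros. The signal and kernel below are made up for illustration:

    ```python
    # Toy deconvolution: undo a known circular blur by dividing in the
    # Fourier domain. Pure standard library (naive O(n^2) DFT via cmath).
    import cmath

    def dft(x, sign=-1):
        n = len(x)
        return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                    for k in range(n)) for j in range(n)]

    def idft(X):
        n = len(X)
        return [v / n for v in dft(X, sign=+1)]

    def circ_conv(x, h):
        n = len(x)
        return [sum(x[(i - j) % n] * h[j] for j in range(n)) for i in range(n)]

    signal = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
    kernel = [0.5, 0.25, 0.25] + [0.0] * 5   # blur kernel, padded to length 8

    blurred = circ_conv(signal, kernel)

    # Deconvolve: divide the spectra pointwise, then transform back.
    S, K = dft(blurred), dft(kernel)
    recovered = [v.real for v in idft([s / k for s, k in zip(S, K)])]

    print([round(v, 6) for v in recovered])
    ```

    Real images are 2-D and real blur kernels often have near-zeros in their spectrum, which is why regularized (Wiener-style) deconvolution is used instead of bare division, and why destroying the pixels outright is still the only safe option.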

  • Luckily we were still able to push our annual alp weekend from last week to this one. :)

    The last three days were actually very good, and in the few hours when there was a thunderstorm, we at least had a very nice view.

  • pics @lemmy.world

    Rain over Vierwaldstättersee