Yonle

A self-taught programmer & sysadmin from Indonesia who hosts stuff.

swf died around 2020, but then came back with a fresh new player known as ruffle, which tackles many problems the original player had.

there have been attempts to get it working and make it alive again. platforms that had it back then, like newgrounds and kongregate, try exactly that by putting ruffle onto their websites. that works, however most swfs are buried for good nowadays. it's not even as instant as it was back then.

halfne miku studio

a miku clock swf playing on a post

on fedi, the ability to play swfs is limited to a small number of instances, and depends on the frontend being used. as of the time i'm writing this (March 9th 2026), the only frontends that can play swfs are:

  • Pleroma-FE
  • Sharkey, a misskey fork
  • Waltuh.cyou's akkoma-fe fork

although the first two frontends can do the job, there's an apparent potential security risk.

same context, window for hell.

if you check how these two frontends do their job, the flow looks like this:

frontend -> ruffle.js -> ruffle.wasm -> flash

this is fine by itself. ruffle already has its own sandbox set up per SWF. however, we still have a potential security window open: even though we have disabled allowScriptAccess, there are still risks. for example, if ruffle itself is vulnerable (which is rare), it could basically alter or read the page for its own purposes.

and we don't want that potential risk to happen.

my “secured” approach

if you remember, akkoma and pleroma started playing with CSP to tackle vulnerabilities from media (eg, images, videos, etc). it might not sound like it makes much sense, but you can't blame them for protecting everyone. akkoma, for example, has begun encouraging everyone to move their media domain so it's not the same as their instance/frontend domain.

as for the cross-domain approach, though, i would say it's smart. you see, cross-domain means that if anything happens to something loaded from a different domain, nothing on the root domain is really impacted much: cookies, localStorage, etc.

and so, i did the same for the flash function.

akkoma-fe actually has code for loading SWFs, but it's mostly unused: they never bundled ruffle in the end, so the code is there, but it doesn't work. it's also as vulnerable as what i described above.

so, my approach is now this:

akkoma-fe (has csp) -> iframe (with sandbox) -> simpleWebFlashViewer (different domain, also has it's own csp) -> ruffle.js -> ruffle.wasm -> swf

you can say this is way more than necessary, and i agree. however, it's on purpose: i was focusing a lot on security, so i had to make it this way.

for this job, modifying akkoma-fe alone wasn't enough; i also had to alter akkoma-be to adjust its CSP header so the iframe would work.

simpleWebFlashViewer is a small thing i made as the swf loader, paired with ruffle. all akkoma-fe needs to do is basically iframe it and forget about it. i also documented how you should apply the CSP on simpleWebFlashViewer despite it being a simple static page. it's highly recommended that you do that.

this approach, despite being overkill, is actually able to destroy the ruffle player properly with no potential memory leakage. when you press the stop button, it removes the iframe from the frontend, immediately freeing the memory and CPU usage in an instant. as a result, when a user faces such a problem, they don't need to refresh the entire page.

as a result, my safety plug worked. the recent Windows XP tour SWF that i just uploaded tries to access intro.txt on the path where simpleWebFlashViewer is located, which is nonexistent.

even if it tries to access beyond that, i guess it's pretty much impossible, as there's only /media, /proxy, and /static/flashplayer (this is where simpleWebFlashViewer lives), and the player & loader itself is on a different subdomain (media.fedinet.waltuh.cyou) than the root (fedinet.waltuh.cyou).

what if you're an admin of existing akkoma instance?

to be frank with you, i think you can simply alter the CSP header to achieve the same thing as what my akkoma-BE does, and then use my akkoma-fe on your instance. you don't need to trust me, because i highly doubt you would, but if you're fine with experiments, then go on.

you then start serving simpleWebFlashViewer on a different subdomain, say, your own media domain at https://fedimedia.yourdomain.com/static/flashplayer/. if you're using caddy, then this setup would make sense:

  handle_path /static/flashplayer* {
    root /var/www/flashplayer/
    header "Cache-Control" "public, max-age=604800, immutable"
    header "Content-Security-Policy" "script-src 'self' 'wasm-unsafe-eval'; style-src 'self' 'unsafe-inline'; frame-ancestors 'self' https://fedinet.waltuh.cyou"
    file_server
  }

you will want the client's browser to cache the ruffle WASM aggressively (hence the Cache-Control header above), because the runtime itself is around 13 MB.

now, on your akkoma, copy instance/static/frontends/pleroma-fe/<ref>/static/config.json to instance/static/static/config.json, then edit the config and adjust your flashplayer_loader to look like this:

{
  ...
  "flashplayer_loader": "https://fedimedia.yourdomain.com/static/flashplayer/"
}

conclusion

an anime girl with a wide mouth open on a fan

well, what do you think? although the solution i made is a bit overkill, it ended up being more good than bad: arbitrary risks are somewhat minimized now that ruffle runs in a different page context (the iframe).

if you're curious, how about giving it a try at fedinet.waltuh.cyou? our instance is currently open for registration.

that's it from me.

you can also watch the video showcasing this functionality.

happy hacking.

today, i'm going to talk about how to set up an email server with the components explained above.

cirno holding a letter

your requirements will be:

  • access to alter the rDNS of ur IP (tl;dr: set it to ur domain)
  • a domain that you can manage via DNS
  • a debian/openbsd server
  • a public IP address
  • a server that can serve port 25 (important: CONSULT your ISP/VPS provider's ToS)

what we gonna do (in order, so you won't cry):

  • set up dns
  • configure spf
  • configure dkim (step 1: make le key; step 2: make le DNS record)
  • configure dmarc
  • get the server software working
  • prepare le postgresql database
  • prepare le mailbox
  • configure le opensmtpd
  • configure le rspamd
  • tool for inbox (dovecot)


set up the DNS first.

in this part, we will configure these things:

  • [required] SPF: basically a policy that tells a mail server which mailer IPs are allowed to send as your domain
  • [required] DKIM: ur key for signing ur mail on your domain's behalf so nobody can spoof mail with your domain
  • [optional] DMARC: the final verifier after SPF and DKIM


spf: the easiest thing to setup

assuming you have the following public IP addresses (also used as the IPs that relay ur mail to other email servers):

  • ipv4: 203.0.113.5
  • ipv6: 2001:db8::5

and then assuming your domain is waltuh.cyou,

set a TXT record in ur domain's DNS. it should look like this:

v=spf1 mx ip4:203.0.113.5 ip6:2001:db8::5 -all

or if you only have ipv4:

v=spf1 mx ip4:203.0.113.5 -all
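if you want to see how the two variants are composed, here's a tiny helper (build_spf is just a hypothetical convenience function i'm sketching here, not anything spf itself defines; the addresses are the example ones from above):

```shell
# hypothetical helper: compose an SPF TXT value from your relay addresses.
# "mx" allows your MX hosts, "-all" hard-fails everything else.
build_spf() {
  v4=$1 v6=$2
  rec="v=spf1 mx ip4:$v4"
  [ -n "$v6" ] && rec="$rec ip6:$v6"   # add ip6: only when you actually have one
  echo "$rec -all"
}

build_spf 203.0.113.5 2001:db8::5
build_spf 203.0.113.5
```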

that's it. now let's continue


dkim: “this is my signature. i am real.”

step 1: make le key

get into ur server's shell, and go make a directory at /etc/mail/dkim.

go there, and assuming ur current dir is now /etc/mail/dkim, do this:

openssl genpkey -algorithm ed25519 -outform PEM -out waltuh.cyou-ed25519.key

this will make a key at /etc/mail/dkim/waltuh.cyou-ed25519.key. adjust it according to ur use.

then, make the string of le dns TXT DKIM record:

printf "v=DKIM1;k=ed25519;p=%s" "$(openssl pkey -outform DER -pubout -in waltuh.cyou-ed25519.key | tail -c +13 | openssl base64)"

it should look like this:

v=DKIM1;k=ed25519;p=tM22KunOkYfEtLzvaUQQcwjUGw8c6hg/v24gIa46oSY=
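if you want to sanity-check that pipeline without touching /etc/mail/dkim, here's the same thing run against a throwaway key in a temp dir. since an ed25519 pubkey is exactly 32 bytes, the base64 p= value should always come out to 44 characters:

```shell
# generate a throwaway ed25519 key and derive its DKIM TXT value;
# tail -c +13 strips the 12-byte DER SubjectPublicKeyInfo header,
# leaving just the raw 32-byte public key for base64
tmp=$(mktemp -d)
openssl genpkey -algorithm ed25519 -outform PEM -out "$tmp/test.key"
pub=$(openssl pkey -outform DER -pubout -in "$tmp/test.key" | tail -c +13 | openssl base64)
record="v=DKIM1;k=ed25519;p=$pub"
echo "$record"
rm -rf "$tmp"
```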

step 2: make le DNS record

before we continue, you need to know this: in dkim, there's something called a selector. you can basically use it as versioning for ur dkim. this will be important in both dns and ur smtp server, especially when you're rerolling ur dkim key.

so, for dns, you must make a DNS record at <selector>._domainkey.waltuh.cyou (assuming ur domain is waltuh.cyou).

now, assuming our selector is mail2026, that would be mail2026._domainkey.waltuh.cyou.

so, make a TXT record there, and put in the previously generated DKIM pubkey that looked like this:

v=DKIM1;k=ed25519;p=tM22KunOkYfEtLzvaUQQcwjUGw8c6hg/v24gIa46oSY=

now save it.

please rember the selector.


dmarc: “if spf fail, or dkim fail, what should i do?”

it's basically that.

it does stuff when SPF or DKIM is not valid.

this is optional, but i suggest you use it.

you just put this DNS record on your domain. specifically, a _dmarc subdomain containing a TXT record that says:

v=DMARC1; p=reject; rua=mailto:dmarc@waltuh.cyou

and then just apply it, say, at _dmarc.waltuh.cyou.

That's basically it.


the loud machine and its endministrator

get the server software working

oh?

you finished the warmup?

OMEDETOU

now, i will be quick here. assuming you have a debian or openbsd host, you need to install these:

  • opensmtpd
  • opensmtpd-filter-dkimsign
  • opensmtpd-filter-rspamd
  • opensmtpd-table-postgres
  • dovecot-core
  • dovecot-imapd
  • dovecot-lmtpd
  • dovecot-pop3d
  • dovecot-sieve
  • dovecot-pgsql

or,

apt install opensmtpd{,-filter-{dkimsign,rspamd},-table-postgres} dovecot-{core,imapd,lmtpd,pop3d,sieve,pgsql} postgresql{,-contrib}
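that brace expansion is easy to get wrong: the -table-postgres element needs its leading hyphen inside the braces, otherwise the expansion produces opensmtpdtable-postgres, which isn't a package. you can preview what apt will actually receive (brace expansion is a bash feature, hence the explicit bash -c):

```shell
# preview the package list the brace expansion produces
bash -c 'echo opensmtpd{,-filter-{dkimsign,rspamd},-table-postgres}'
```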

prepare le postgresql database

go start the postgresql database server (haven't init'd ur db? go init it first), then make le db for opensmtpd:

sudo -Hu postgres psql

then

CREATE USER opensmtpd WITH PASSWORD 'thestrong9stpassw0rd!';
CREATE DATABASE opensmtpdb OWNER opensmtpd;
\c opensmtpdb
CREATE TABLE virtuals (
    id SERIAL,
    email VARCHAR(255) NOT NULL DEFAULT '',
    destination VARCHAR(255) NOT NULL DEFAULT ''
);
CREATE TABLE credentials (
    id SERIAL,
    email VARCHAR(255) NOT NULL DEFAULT '',
    password VARCHAR(255) NOT NULL DEFAULT ''
);
CREATE TABLE users (
    id SERIAL,
    username VARCHAR(255) NOT NULL DEFAULT '',
    email VARCHAR(255) NOT NULL DEFAULT ''
);

yea. copy and paste works.

then quit le psql with \q, open ur editor (can be nano on ur server), and write this to initsetup.sql:

insert into virtuals (email, destination) values ('root', 'yonle@waltuh.cyou');
insert into virtuals (email, destination) values ('postmaster@waltuh.cyou', 'root');
insert into virtuals (email, destination) values ('webmaster@waltuh.cyou', 'root');
insert into virtuals (email, destination) values ('abuse@waltuh.cyou', 'root');
insert into virtuals (email, destination) values ('dmarc@waltuh.cyou', 'root');
insert into virtuals (email, destination) values ('yonle@waltuh.cyou', 'vmail');

insert into users (username, email) values ('yonle@waltuh.cyou', 'yonle@waltuh.cyou');
insert into users (username, email) values ('noreply@waltuh.cyou', 'noreply@waltuh.cyou');

insert into credentials (email, password) values ('yonle@waltuh.cyou', '$2b$...');
insert into credentials (email, password) values ('noreply@waltuh.cyou', '$2b$...');

the virtuals table holds aliases for your email addresses. to put it simply, if someone sends an email to abuse@waltuh.cyou, it gets delivered to yonle@waltuh.cyou. cool right?

when the virtual address ends up at vmail, it then gets delivered to your inbox, marking the end of the alias venture. for example, since yonle@waltuh.cyou's alias is vmail, mail for it will be delivered to a mailbox specifically for yonle@waltuh.cyou.
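the alias chase above can be sketched as a small loop. this is purely an illustration with a flat file standing in for the virtuals table, not how opensmtpd actually implements it:

```shell
# toy resolver: follow virtuals entries until we hit 'vmail' (the mailbox)
# or run out of aliases. virtuals.txt mirrors a few rows from the table above.
cat > virtuals.txt <<'EOF'
abuse@waltuh.cyou root
root yonle@waltuh.cyou
yonle@waltuh.cyou vmail
EOF

resolve() {
  addr=$1
  while :; do
    dest=$(awk -v a="$addr" '$1 == a { print $2; exit }' virtuals.txt)
    [ -z "$dest" ] && break        # no alias row: deliver as-is
    [ "$dest" = vmail ] && break   # vmail terminates the chain
    addr=$dest
  done
  echo "$addr"
}

resolve abuse@waltuh.cyou
```

so mail to abuse@waltuh.cyou hops through root and lands in yonle@waltuh.cyou's mailbox.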

adjust it according to your use, then

sudo -Hu postgres psql -d opensmtpdb -f initsetup.sql

wondering about the password hashing? it must be bcrypt. google how to generate a bcrypt (blowfish) hash, coz there are so many ways out there.

but i don't wanna

OKAY, OKAY, YOU FUCK. GO INSTALL A GO COMPILER, MAKE A FOLDER, OPEN AN EDITOR, WRITE THIS CODE:

package main

import (
        "fmt"
        "os"

        "golang.org/x/crypto/bcrypt"
)

func main() {
        if len(os.Args) != 2 {
                panic("give me password.")
        }

        // cost 12: slow enough to resist brute force, fast enough for logins
        hash, err := bcrypt.GenerateFromPassword([]byte(os.Args[1]), 12)
        if err != nil {
                panic(err)
        }
        fmt.Println(string(hash))
}

then, compile it

go mod init a
go mod tidy
go build -o gen

and do your thing

./gen 'yourstr0ng9stp4ssw0rd!'

it will emit something like this:

$2a$12$wrhwp/F/vakmnznJvDyrfOlNzKVZYtOY05CMBcvVQi8LOmEJimLI6

prepare le mailbox

tl;dr:

sudo useradd -m -r -u 5000 -g mail -d /var/vmail -s /sbin/nologin vmail

# Make sure the mail directory exists
sudo mkdir -p /var/vmail
sudo chown -R vmail:mail /var/vmail
sudo chmod -R 700 /var/vmail

configure le opensmtpd

in debian, the config is located at /etc/smtpd.conf.

in openbsd, it's at /etc/mail/smtpd.conf.

but the tl;dr: go use my config here.

and then, go to /etc/mail

make the following files:

  • hosts: a file listing the IPs that you will use for relaying email outbound
  • mailname: ur mail name
  • postgresql.conf: configuration for opensmtpd to connect to ur postgresql

you can check for the examples here.
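for postgresql.conf, here's a hedged sketch of what the opensmtpd-table-postgres config could look like, wired to the tables created earlier. the key names (conninfo, query_alias, query_credentials) and the $1 placeholder are my reading of the table_postgres(5) man page, so verify against the docs shipped with your version, and add whatever other query_* lines your smtpd.conf needs:

```
conninfo host='localhost' user='opensmtpd' password='thestrong9stpassw0rd!' dbname='opensmtpdb'

query_alias SELECT destination FROM virtuals WHERE email=$1;
query_credentials SELECT email, password FROM credentials WHERE email=$1;
```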

before we start our opensmtpd, first,

configure le rspamd.

just, go use my config here in ur /etc/rspamd/local.d/settings.conf.

then start ur rspamd, systemctl enable --now rspamd


before we be all good and all,

chown _dkimsign:_dkimsign /etc/mail/dkim/waltuh.cyou-ed25519.key

also, ensure that ur ssl keys are owned by root. if not, opensmtpd won't start.

finally, start ur opensmtpd: systemctl enable --now opensmtpd. (you can check the config for syntax errors first with smtpd -n.)

if everything goes smooth, congratulations,

You just started ur first mail server!


tool for inbox (dovecot)

this is the final part before you're finally able to go rest.

it's just 2 deadass simple config files. go check my config here.

you will then just need to alter the psql connection in local.conf before you start the dovecot server by running systemctl enable --now dovecot


finally,

go try connecting to ur email server and try sending an email.

note though, sending email to gmail or yahoo or outlook might make ur mail end up in the spam folder at first & this is normal. you just need to build up ur ip & domain reputation and you're all good to go. the [Report as not spam] button exists for exactly this purpose.

alright. hope this helps you.

bye.

i didn't believe that anyone would be interested in my fork. and to be frank, these modifications are mostly done just to fix the problems of using fedi as a third world country user.

our akkoma instance, fedinet.waltuh.cyou

this is absurd.

i will start by saying this: everything that you are going to use here is completely experimental, so stability is NOT 100% guaranteed, all while i'm trying my best to keep things as stable as possible.


system requirement

minimum requirements:

  • cpu cores: 2 (1 will still work)
  • ram: 2 GB (1 GB will still work, but it's a real gamble)
  • SSD storage: 50 GB
  • fast af internet speed on the server
  • probably shared unlimited bandwidth usage

recommended requirements:

  • cpu cores: 4 (or more)
  • ram: 4 GB
  • SSD storage: 60 GB
  • same internet speed
  • same bandwidth usage


set up the akkoma

i won't recreate the entire akkoma manual because i have done that before; however, that doesn't mean you need to follow it either. just follow what's in the akkoma docs first.

now, about the clone URL: clone this one:

git clone https://akkoma.dev/Yonle/akkoma.git -b master /opt/akkoma

and then continue installation as usual.

however, in your prod.secret.exs, please make sure you have this set:

config :pleroma, :media_proxy,
  enabled: true,
  redirect_on_failure: true,
  base_url: "https://media.yourfedi.com"

config :pleroma, :media_preview_proxy,
  enabled: true

for frontend, please read this.

but i have an existing akkoma instance already

good news: it doesn't conflict with the upstream. so,

git remote add yonlemodif https://akkoma.dev/Yonle/akkoma.git
git switch master
git pull yonlemodif master

then, do the normal update task:

mix deps.get
env MIX_ENV=prod mix ecto.migrate

then, modify your prod.secret.exs to enable media proxy + media proxy preview just like above (or do it via admin-fe), then configure the frontend the same as above,

and restart your akkoma backend.


after you've got the akkoma properly running, now,

set up the mediaproxyoma+go-bwhero

there are many ways of setting up mediaproxyoma+go-bwhero. the easiest is running it with docker compose.

via docker compose (the easiest)

modify the docker-compose.yml with the proper environment variables, as described in mediaproxyoma's README, then just do this:

docker compose up

that's all.

manually, or via incus

if you're an admin who tries to avoid docker on servers, like me, then:

  • set up an alpine linux edge container
  • compile and install the two things:
apk add git nano go vips-dev vips-magick imagemagick ffmpeg pkgconf
git clone https://github.com/Yonle/mediaproxyoma
git clone https://github.com/Yonle/go-bwhero bwhero

for p in mediaproxyoma bwhero; do
  # build to a different name so the binary doesn't collide with the source directory
  (cd "$p" && go build -o "../$p.bin")
done

doas mv mediaproxyoma.bin /bin/mediaproxyoma
doas mv bwhero.bin /bin/bwhero

then, copy what's in the mediaproxyoma/installation/init.d/ directory to /etc/init.d/, edit both /etc/init.d/mediaproxyoma and /etc/init.d/bwhero, then enable & run them:

cd mediaproxyoma
mv installation/init.d/* /etc/init.d/
nano /etc/init.d/bwhero
nano /etc/init.d/mediaproxyoma

service bwhero start
rc-update add bwhero
service mediaproxyoma start
rc-update add mediaproxyoma

now, configure the reverse proxy

the reverse proxy that we will use here is caddy. you can use anything equivalent, but to keep it short, you can basically do the following.

this is the Caddyfile for:

  • akkoma listening at 127.0.0.1:4000
  • mediaproxyoma listening at 127.0.0.1:8080

yourfedi.com {
  log {
    output file /var/log/caddy/akkoma.log
  }

  encode zstd gzip
  reverse_proxy 127.0.0.1:4000
}

media.yourfedi.com {
  @mediaproxy path /proxy/*
  @robots path /robots.txt

  log {
    output file /var/log/caddy/media_fedinet.log
  }

  handle @robots {
    header Content-Type text/plain
    respond "User-agent: *
Disallow: /"
  }

  handle @mediaproxy {
    reverse_proxy 127.0.0.1:8080 {
      transport http {
        response_header_timeout 32s
        read_timeout 32s
      }
    }
  }
}

optional, but it SHOULD reduce your load A LOT

if you have big storage space on your server host, i recommend setting up a Varnish cache server. it's small, yet light & fast af.

you can check our setup here. adjust it according to your setup.


closing

so, how does it feel?

i don't know what your real comments are, but if things feel smoother than before, then good. welcome to the feeling that the third world needs.

that's it. bye.

here is fedinet.waltuh.cyou, my very experimental akkoma instance, probably the most experimented-on one ever. there are many customizations that have been done on the backend and frontend side, too.

the screenshot of the akkoma in web

it has been the second month of me maintaining this thing all on my own. behind this simple-looking thing, there's actually a great amount of effort that has gone into this very setup.

the setup

if i draw the entire thing as a diagram, it looks like this:

the entire akkoma diagram

you see, usually you only put the akkoma backend here and that's it most of the time. this works, usually. but not in our very setup, where we are relying on contabo storage, which oftentimes has failures (eg, random outages, timeouts, instability, etc).

and so that's how the journey starts. i guess?

problem 1: the media is big, and akkoma handles it wrong by default

generally, nobody is wrong in how they upload their avatar, banner, media, etc. this one is mostly about how we serve it to the client.

and there's also nothing wrong with trying to preserve the original as much as possible.

so much so, we ended up serving a big image in a small <img>:

the google pagespeed report pointing at a user avatar

this is no problem for those with fast and unlimited internet, but it becomes a problem for those with a bad peering route to the server (which causes connections/loading to become rather slow), and those on limited bandwidth (roaming users, i see you).

to fix this, we can enable the media proxy in our server, and then enable post thumbnailing. this works, but is somewhat terrible in several ways. the mediaproxy in akkoma:

  • can only process jpg
  • redirects / serves anything else raw

the thing is, the niche & weeb communities on fedi mostly post their favourite fanarts in PNG, which barely fixes the main bandwidth problem.

let's remember this: a 20-second high quality animated GIF is bigger than a video with the same duration and dimensions. which then ends up with:

  • your user's browser loading 20 MB of GIF that they never interacted with or expanded
  • your user scrolling past a post with multiple raw camera photos loading at least 5 MB per image without even expanding them
  • a pixel GIF being bigger than an animated webp

a problematic bandwidth problem with a problematic solution. well, it seemed we were stuck, and we began to think of switching clients, or switching the entire server software to something that has zero issues with these.

and so,

here comes mediaproxyoma

you see, for media thumbnailing, pleroma has apparently begun using libvips for the thumbnailing process, while akkoma to this day is still... relying on the deprecated convert command from imagemagick, to maintain compatibility with debian hosts.

to be frank, if you have read my previous post on the waltuh.cyou setup, the host is actually also running debian. the only difference here is that i'm isolating every single service in a container using incus, with each container using alpine linux as the base in order to get the latest dependencies. so, how do i fix the problem here?

i made my own media proxy backend. yeah. that's what i did. since i already had an image compressor proxy backend called go-bwhero, pairing it with mediaproxyoma ended up being a great match: one process mostly focusing on proxying, and the other processing and thumbnailing the image.

so, after some testing, i ran the media proxy backend on my server, and immediately pointed any path that goes to /proxy/* at this backend instead. initially, it had several bugs, but after several reports and spotted problems, it's been working pretty well ever since.

the same goes for go-bwhero. after several experiments, i ended up adjusting several parameters that sped up processing to just 0.4-1.1 seconds on average.

here's a snippet of our Caddyfile. our akkoma backend is listening on 10.154.198.6:4000 and the mediaproxyoma backend is listening on 10.154.198.11:8080:

fedinet.waltuh.cyou {
  log {
    output file /var/log/caddy/akkoma.log
  }

  encode zstd gzip

  reverse_proxy 10.154.198.6:4000
}


media.fedinet.waltuh.cyou {
  @mediaproxy path /proxy/*
  @robots path /robots.txt

  log {
    output file /var/log/caddy/media_fedinet.log
  }

  handle @robots {
    header Content-Type text/plain
    respond "User-agent: *
Disallow: /"
  }

  handle @mediaproxy {
    reverse_proxy 10.154.198.11:8080 {
      transport http {
        response_header_timeout 32s
        read_timeout 32s
      }
    }
  }
}

problem 2: it's redownloading the same exact things

hooray! we now have a proper image compression that saves a lot of user bandwidth. now what?

GUESS WHAT,

EMOJI FILES KEPT BEING REFETCHED ON EVERY WEB SESSION THAT WE OPENED. so did wallpapers, banners, user avatars, EVERYTHING. guess what that means? the bandwidth problem is still not solved!

Introducing: immutable in Cache-Control

The immutable response directive indicates that the response will not be updated while it's fresh

it doesn't matter what your concern is here, but hear me out: emoji files, the user avatar endpoints, thumbnails, and practically any media endpoint we know mostly end up giving the same bytes. since akkoma-fe itself will refetch these from the server again, it's going to be another waste of duplicate bandwidth.

so, we decided to set our Cache-Control to this:

Cache-Control: public, max-age=1209600, immutable
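that max-age is exactly two weeks; quick arithmetic check:

```shell
# 1209600 seconds expressed in days; prints 14
echo $((1209600 / 86400))
```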

then, we apply it to these endpoints:

  • /emoji/*, /static/js/*, and /static/css/* on the akkoma-fe endpoint
  • /media/* on our varnish setup
  • /proxy/* on our mediaproxyoma

so, our Caddyfile looks like this now:

fedinet.waltuh.cyou {
  @static path /static/js/* /static/css/* /emoji/*

  log {
    output file /var/log/caddy/akkoma.log
  }

  encode zstd gzip

  handle @static {
    reverse_proxy 10.154.198.6:4000 {
      @success status 2xx

      handle_response @success {
        header Cache-Control "public, max-age=1209600, immutable" {
          defer
        }
        copy_response
      }
    }
  }

  reverse_proxy 10.154.198.6:4000
}

update Feb 23rd 2026: our forked akkoma-be now sets the immutable cache-control flag, so this reverse proxy approach is no longer needed, unless you're running the original akkoma/pleroma backend.

notice that we did not alter mediaproxyoma. that's because mediaproxyoma sets this by default on the backend, and so does our varnish setup.

as a result, the next time a user refreshes or opens akkoma-fe again, the browser only loads what's not cached and reuses what is, effectively saving a large amount of bandwidth, and the web loads faster.

problem 3: the avatars, covers/banners, and even our local posts are still big

we have neutralized the damage a bit. but now we face the real problem: the avatar being bigger than the post thumbnail. what sorcery is this?

this is actually a problem in akkoma-fe, where the original code chooses to load the original image so that animated avatars keep working, because the mediaproxy implementation in akkoma has very limited thumbnailing capabilities, as i've mentioned previously.

this is understandable... but we're using my own media proxy backend, which can:

  • convert literally any known supported image format to webp
  • convert a GIF to animated WEBP quickly (seriously, i converted a 21 MB 60 FPS gif to animated WEBP; it ended up just 2 MB, all while taking only 1.8 seconds)
  • thumbnail a video to webp quickly

and guess what we do this time.

we forked akkoma-fe, with several changes:

  • use the THUMBNAILED ENDPOINT for PFP and COVER (except when viewing the original)
  • NEVER AUTOLOAD VIDEO, not even the metadata. we show the thumbnail via the poster attribute of the <video> element instead.

the result? previously, it took like 50 MB to load at first, but now:

WE NOW ONLY USE 25 MB ON FIRST LOAD!

look at these. the videos showing 0 seconds literally means that we didn't preload anything:

screenshot of a post with a video, and the same with different videos, again and again

now look. we got it working, but we also want to treat our local users that upload media the same way. akkoma-be, for the same mediaproxy limitation reason, bypasses anything from local upload. makes sense, but not for us (where the media server is literally hosted outside of our host).

since we're using our own mediaproxy implementation, as usual, we forked akkoma-be too, fixed the problem, and it works on fedinet.waltuh.cyou.


for the changes themselves, i've sent some to the upstream, but after realizing how limited their mediaproxy implementation is, i decided to draft the two PRs that i made:

  • https://akkoma.dev/AkkomaGang/akkoma/pulls/1067
  • https://akkoma.dev/AkkomaGang/akkoma-fe/pulls/482


closing

so, should you try these tricks that i've made?

well, i would say it depends. but if you also want to make sure that users from third world countries are able to enjoy using your instance, then you probably want to follow the same approach as i do.

the reason i did all of this is because i am a user from a third world country myself. that should give you enough of a picture already.

mediaproxyoma and go-bwhero, the things i wrote, are surprisingly low on memory usage even with thousands of requests every day.

well, that's it. thank you for reading my silly journey.

if you're interested though, you can try replicating my setup.

Frame from "CENSORSHIP" by WORLD ORDER

When people finally get a platform where they control their own space, they often end up building the same old jail.

While it's no different from typical mainstream censorship, it's mostly done worse rather than better, to say the least.

Consider some few examples:

  • Nostr promises “censorship resistance,” yet some users still want to dictate how the whole network behaves.
  • Mastodon, filled with Twitter refugees, has users begging for a “Reply Control” feature just like on Twitter, Facebook, or Instagram to protect their sanity. The intention is fair, but almost useless in a federated network.

Let's see the following issues:

  • https://github.com/mastodon/mastodon/issues/14762
  • https://github.com/mastodon/mastodon/issues/8565

These two have the following demands:

> Restrict replies to:
> – Accounts mentioned in the post
> – Accounts you follow + mentioned accounts
> – Or no restriction
> – Disable replies entirely

Fine idea if your platform stores everything in one place. Not fine if it’s federated.

You might clean up your comment section, but remember: other instances see everything.

These controls only work on the centralized platform

Sign of private garden in London, https://unsplash.com/photos/green-and-white-wooden-signage-Ly7dRlBg7UY

Centralized platforms can actually enforce restrictions effectively because:

  • Single source of truth. All posts, replies, and permissions live in one database under one admin.
  • One garden, one gardener. Users interact inside the same platform; nothing leaks outside.
  • Centralized moderation. Bans and rules apply network-wide instantly.
  • No federation headaches. No need to respect or reconcile policies from other servers.

Federation flips that model. Your “reply control” is local to your server. Other servers don’t care about your neat fences.

What actually happens (diagram)

Expectations:

People comment -> Your homeserver -> Your homeserver broadcasts to everyone

Reality:

Person A comments -> Person A’s instance -> Person A’s instance broadcasts to everyone (including your users)

Consequently, the concept of a fully controlled comment environment on your instance is misleading. Actions such as blocking or muting a user only alter visibility locally; other instances continue to receive and display the original content along with all associated replies.

But why did you join Mastodon in the first place?

Ask yourself: why are you here?

Why this instance? Why this community? Or did you just click the first signup link that worked?

Let’s be honest: why did you even sign up?

  • Was it to escape Twitter’s nonsense?
  • To get away from toxic algorithms?
  • Or because someone told you Mastodon is “censorship-resistant”?

Now, ask yourself this: why this instance?

  • Did you pick it because of its community, moderation, or philosophy?
  • Or did you just land there by accident, because a friend recommended it, or the first signup link you clicked worked?

The truth: your experience depends entirely on your instance. Filters, reply controls, or “censorship resistance” mean nothing if the admin is inactive, moderation is weak, or the fediverse dumps chaos into your feed.

If you chose poorly, you will end up in the same hell you were trying to escape.

Solution?

If you want actual control, do one of the following:

  • Self-host an instance. Managed hosting exists if you don’t want to deal with administration.
  • Use filters. They exist; learn and apply them.
  • Switch instances. If your admin is inactive or moderation is poor, move.
  • Return to Facebook. Centralized cages are still the only way to achieve a perfectly sanitized environment.

here comes waltuh

how waltuh.cyou was born, etc.

ariana.afnet.us is down.

before i begin: i am an admin of lecturify.net, a volunteer. i still am today, but no longer as active as i am on waltuh.cyou. one day, around September 2025, the ariana server went down permanently all of a sudden. we did not manage to make a backup in time due to lack of resources. as a result, the following services are no longer available:

  • yonle.lecturify.net, my own website
  • fedi.lecturify.net, my first self hosted akkoma instance run since 2022
  • thelounge.lecturify.net
  • soju.lecturify.net
  • www.lecturify.net (the lecturify website)

we still had an openbsd server named oraneg running at lecturify.net, but i had already lost motivation to keep things up from there.

so i ended up thinking about purchasing my own VPS and a domain instead.


initially, my goal was to sell some sort of shared NAT VPS for those who were looking for cheap hosting, but after owning a VPS once again, i realized that the maintenance and moderation for it would simply be too rough to tolerate. so i backed down on that plan.

here comes the VPS

on january 10, 2026, i purchased a VPS at Contabo. i picked the smallest package (Cloud VPS 10), and then purchased 250 GB of object storage because i was afraid i would run out of SSD space.

but little did i know, i completely missed the info. i didn't see that the VPS would already come bundled with 150 GB of SSD, leaving me with this object storage that i'm now not sure when & how to use...

All of it for $10/month...


either way, this is my setup:

  • we're using debian trixie for the OS since i don't wanna deal much with unnecessary updates
  • incus as the container runtime for services, because i'm already used to administrating stuff this way

the incus setup

about the incus container setup that i have ...

  1. each container gets its own local ip, and they can communicate with each other via a bridge
  2. the host can access the services that those containers bind to, without the need for tcp forwarding or similar

so, basically,

caddy(host) ->
-> 10.xx.xx.6 akkoma(container) -> akkoma backend
-> 10.xx.xx.5 postgresql(container) -> postgresql backend

since it's all done over a bridge, there should be no performance impact at all.
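to illustrate, the layout above could be brought up roughly like this. note that this is only a sketch: the container names, the debian/trixie image alias, the addresses, and akkoma's port are illustrative assumptions, not my actual config.

```shell
# create the containers on incus' default bridge
# (names and image alias are assumptions, not my actual setup)
incus launch images:debian/trixie postgresql
incus launch images:debian/trixie akkoma

# each container gets a local ip on the bridge; visible with:
incus list

# since the host sits on the same bridge, caddy on the host can reverse
# proxy straight to a container's ip -- no tcp forwarding needed.
# example Caddyfile entry (4000 being akkoma's default bind port):
#
#   waltuh.cyou {
#       reverse_proxy 10.xx.xx.6:4000
#   }
```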

the thing is, despite how the flow looks, every service is set up manually per container. i know that i could simply use docker/podman to make the flow easier, but there's simply something about those solutions that keeps lingering in my mind.


now that we've talked about how the setup goes, let's take a look at the domain that i use.

the domain

initially, i purchased the domain first, before the server itself. i looked around several domain registrars, and i remembered namecheap, so i went and checked there.

because this project is all not so serious, i picked “waltuh” as the name. it has a connection with Walter White, and i picked it because i like doing experiments.

for the tld, there are actually a bunch of cheap TLDs out there, but one that caught my eye is the .cyou TLD. when i googled it, it turns out to basically be short for “see you”.

so, the end result is, “waltuh.cyou”. “Waltuh, See you”.

the initial purchase cost around a buck for the first year. i thought i would only pay that amount every year, but once i checked what it would actually cost, the renewal is $19.68/year. that's indeed expensive for us Indonesians, as our IDR currency is weak against USD, but given that it's just a once-a-year purchase, i don't see a reason to complain.

and here we are.

well, what are we even doing here, anyway?

as of the time i'm writing this, waltuh.cyou is now a month old.
