waltuh

Reader

What's on the waltuh user's mind?

from Yonle

photo: my old laptop

previously, while i was shopping for a laptop to replace my old Pentium T4400 one, i also came across ThinkPad laptops, especially in malls in Batam like Nagoya Hill. every ThinkPad here was a different model, but after taking a look,

not a single laptop on sale actually had specs any less underwhelming for the same price as the laptop i'm using now.

on average, the processors used in these ThinkPads ranged from i3 to i5, limited to gen 6 through 10. even then, the RAM was often underwhelming too, no matter that i'd be putting linux on the laptop.

in the end, i bought an MSI Modern 14 C12MO with an i3-1215U processor (alder lake, gen 12), a 256 GB Phison NVMe, and around 8 GB of RAM. the RAM is still underwhelming, but for the price and the options available, this is roughly the right fit for my work. besides, i'm not a gamer who plays heavy games anyway.

“[brand] has hinge problems”

at first, a college acquaintance of mine told me that MSI laptops tend to be prone to hinge problems, but i think this is actually a common issue on any laptop with this kind of two-hinge design. so even if i had bought an ASUS vivobook instead, durability-wise it would be just the same.

i've been running arch linux on it for more than 6 months now, and thankfully it's still going strong.

incidentally, some people buy asus or thinkpad laptops because spare parts are easy to find, but in my opinion,

the older a laptop gets, the rarer the spare parts for its components become. whatever the brand, spare parts stay hard to get hold of even when stock is plentiful.

framework, while modular by design, is actually underwhelming in specs and cost. and that's before customs duties, taxes, and everything else that comes with intercontinental shipping.

so whatever the laptop, take the best care of it you can, whatever the brand. even when spare parts exist, the repair itself can be a hassle, or such a half-measure that it's saner to save up and replace the laptop with a new one.


anyway, my point is this: be pragmatic first. don't buy out of prestige, branding, or promises of whatever kind if in the end it's going to be just like any other hardware.


a simple little tip.

if there's a neighboring model with twice the specs at a price only 10% higher than the option you're chasing, save up for that machine instead.

sometimes the dry wallet might bother you at first, but over time the result won't disappoint. because honestly, buying the smaller option is sometimes more disappointing, and occasionally it even costs you more.

 

from ひとりワンルーム

Album link: https://slowerpace.bandcamp.com/album/marvellus

Listen and support the artist on Bandcamp here: https://slowerpace.bandcamp.com/music

Best enjoyed when: enjoying the afternoon breeze with coffee and a cigarette
Aftertaste: why does my cigarette run out so quickly...
Best track(s): Air Hush
For fans of: Koop, Oblique Occasions, haircuts for men

#Music #AlbumSpotlight

 

from ひとりワンルーム

KOKOROKO is the first album from the eight-member band based in London, UK. 'Kokoroko' is a word from the Urhobo language meaning 'be strong'. Released on March 8, 2019, the album holds four afrobeat- and R&B-flavored tracks that would be a shame to miss.

Support Kokoroko via the following Bandcamp link: https://kokoroko.bandcamp.com/music

Best enjoyed when: looking for songs in a similar vein to Incognito
Aftertaste: one listen is not enough
Best track(s): Adwa
For fans of: Incognito, Nubya Garcia, Tom Misch

#Music #AlbumSpotlight

 

from ひとりワンルーム

Content warning: mental health, suicide.

This past January through February, my severe depression came back.

Since my career did not go well last year, I thought, “hopefully at the start of this year I can find work again.”

So, this past January I tried hard to find work on LinkedIn. I applied here and there, but a month went by with no news at all. I applied to dozens, even hundreds of jobs, yet was ghosted by almost all of them (and got only one interview).

Then February came, and my depression grew worse.

That month, I thought several times about ending my own life. I had even prepared the tools needed to end my life. At my psychiatrist consultation that month, I told my doctor everything, crying, because I did not know what to do. A few days later, I told my parents that I wanted to end my life. Both of them immediately broke into tears, my father and my mother. I just stayed silent, listening to my parents cry. I do not know whether my decision to tell my parents was right or not, but after saying it, I let go of my intention to end my life.

Then Ramadan arrived. This Ramadan, I intended to worship more devoutly.

But fate had other plans. My father fell ill and was hospitalized four times at three different hospitals. Instead of worshipping devoutly, I spent my time caring for my father, who now has difficulty moving, whether walking or sitting. I performed tarawih and the five daily prayers at home and in the hospital room instead of at the mosque.

Then, in the last week of Ramadan, I fell ill myself. My body was very weak, I felt nauseous, and I kept coughing. You could say this Ramadan felt heavier than last year's.

With all of these challenges, I did everything I could and surrendered the rest to God.

Then Eid al-Fitr came, and thankfully my depression gradually eased. The frequency of thinking about ways to end my life became rarer and rarer.

These days, I feel better. Thank you so much, especially to my family and my friends on Fedi who have always supported me. Thank you so much.

#LifeUpdate

 

from Yonle

when we use something like mastodon, pleroma, akkoma, snac, gotosocial, or misskey, these are actually microblogging software suites that can talk to each other via the activitypub protocol. since they can talk to each other regardless of which software suite is used, a network emerges from this, called the “fediverse”.

since i have hosted various fedi servers for about 3 years, i think it's about time i discussed my experience hosting them. please note: in this blog, i won't really tell you which one is better, nor give resource usage in specific numbers, as those numbers will become irrelevant as this blog gets older. i will only mention what they can do and what they can't do, so you can judge for yourself.


akkoma

this one is obvious already. akkoma is basically a hard fork of pleroma with different mindsets in mind. basically the same heart, different director. since it's pleroma-based, it can handle a lot of users pretty decently and light on the server, without needing a lot of server resources.

this is basically the first fedi software i used to host my first server, at fedi.lecturify.net. the reasons i picked it were, for one, MFM and akkoma-fe.

what it can do:
– react like misskey
– post like misskey with MFM
– MastoAPI
– serve a lot of active users with a small amount of resources
– bubble timeline (a curated timeline of posts coming from several instances the admin picked)

what it can't do:
– be 1:1 compatible with the og MFM
– algorithmic / trending post timeline
– avatar decorations like misskey

honk

this is the second software i tried, as a test, because i was bored. to begin with, this is fedi server software written by Ted Unangst, the author of doas and several other OpenBSD components. it's focused on small system resources & single users. it can be used for multiple users, but as far as i've tried, it's best suited for a single user.

what it can do:
– post.
– anti-distraction.
– sqlite3

what it can't do:
– MastoAPI
– preserve your posts entirely. it's designed for noisy feeds and limited system resources, remember?
– be sane (your avatar and the avatars of the people you follow are not their real avatars; it's a hex pixel avatar generated randomly per user)
– react and post like misskey. you can change the default reaction so it's something other than [star], but it's still limited regardless
– stat reports. you don't know who's following you or when your post gets liked or reposted. it's anti-mainstream; good for people avoiding distraction
– upload videos normally. you can only upload videos in the form of “memes”, where you upload the video to the server and put it in the memes folder
– sqlite3

pleroma

again, this is obvious. but anyway, pleroma is software written by lain, also known as lambadalambada. its initial purpose was basically to replace gnu/social.

anyway, as previously said, it can handle a lot of users pretty decently and light on the server, without needing a lot of server resources.

what it can do:
– react like misskey
– [soon] post like misskey with MFM
– MastoAPI
– serve a lot of active users with a small amount of resources

what it can't do:
– algorithmic / trending post timeline
– avatar decorations like misskey
– a lightweight web client

snac2

at the time i heard of it, snac2 was still really new. like, really new.

like honk, it's focused on being as lightweight as possible and on small system resources.

what it can do:
– MastoAPI
– react like misskey (a new thing. cool)
– run on a toaster running NetBSD
– run on a hacked 4G alibaba modem dongle running debian
– run on a hacked chinese wireless CCTV
– run on a hacked wii
– run on literally anything that can connect to the internet
– run in less than 500 MB of RAM

what it can't do:
– manage the database manually like most solutions that use sql. but it's pretty solid.
– mfm
– s3

misskey

misskey was originally written by a Japanese high school student at the time, named Syuilo. it took inspiration from various platforms like discord.

i won't really talk about the forks here since, core-wise, the computational power needed is pretty much the same (2 vCPU cores minimum, 4 vCPU cores recommended).

what it can do:
– mfm
– customize profiles
– manage storage limits per user
– a bunch of features

what it can't do:
– run in less than 2 GB of RAM, even for just 3 users
– MastoAPI (forks can)
– edit posts (forks can)

note: rough maintenance (the nodejs version must match the version it requires to run, and it must be a glibc system)

gotosocial

basically a “small” and “lightweight” fedi server. i haven't tried it much; i only tried it for a brief moment when i tested mostr.pub federation with gotosocial.

what it can do: – MastoAPI

what it can't do:
– algorithmic / trending post timeline
– anything misskey-related
– a built-in web client
– proper activitypub compatibility (last time, it clashed with snac2. turns out one AP object field can be either a string or an array. but the question is, why this way?)

note: comparably as big as akkoma/pleroma. just go host akkoma/pleroma already, tbh. it's not even close to as small as snac2 or even honk.


alright. i think that's all.

 

from Misa

In physics, we measure everything that can be measured. We see a phenomenon or an object, we observe it, take a measurement, and make a mathematical model to describe and predict its behavior.

Since ancient times, humans have loved to compare things. Imagine we want to grab free pizza at a party: we would look for which piece is bigger or smaller, and of course we would take the bigger piece, right? But how do we know it is bigger than the other piece? Simple: just compare it with the other piece, and you can tell just by looking at it. Hey, that is a bigger slice.

Then a question arises: if something is big, how big is it? If something is long, how long is it compared to another one? Which one is actually bigger? We cannot just say “big” or “small” without knowing how much, right? So we need a way to compare a quantity with another quantity of the same kind. From there, numbers are introduced to define and express that quantity clearly.

Let's take an example: there is a book, and there are identical pens. How can we define how long the book is? We can lay pens in a row along the book, and then we can say, “Hey, the book is two pens long.” The length of the book, which is what we measure, is called a quantity. The pen is the object we use as a comparison, which we call a unit, and 2 is the measurement value. With this, we know the length of the book equals the length of 2 pens.

That is measurement. Measurement is the process of comparing a quantity of an object against another object taken as the unit. We can only measure by comparing a quantity with a quantity of the same kind: the length of a book with the length of a pen, weight with weight, and so on.

The Système International (SI) of Units

Imagine we take a measurement like in the example before. Let's say our friend measures the same book using his own pen as the unit of comparison. There is a problem here: the length of his pen might differ from ours, right? If that happens, the measurement value can change. The book might be measured as 3 pens long simply because his pen is smaller. This can lead to confusion.

Now, let's say another friend measures the same book, but instead of using a pen, he uses a marker, a totally different unit. He might find the length of the book to be one and a half markers, while we got a result of 2 pens. This means we can try to convert our unit (the length of a pen) into his unit (the length of a marker), which sounds great. However, there is a problem: the conversion process can introduce inaccuracy.
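To make that conversion step concrete, here is a toy sketch in Python. All the lengths are made-up numbers for illustration, not real measurements:

```python
# Toy measurement sketch: the same book measured with two different
# ad-hoc units. All lengths are hypothetical values in centimetres.
BOOK_CM = 30.0
PEN_CM = 15.0      # our pen
MARKER_CM = 20.0   # our friend's marker

book_in_pens = BOOK_CM / PEN_CM        # 2.0 pens
book_in_markers = BOOK_CM / MARKER_CM  # 1.5 markers

# Converting markers to pens needs their exact ratio; any rounding of
# that ratio propagates straight into the converted measurement.
pens_per_marker = MARKER_CM / PEN_CM
converted = book_in_markers * pens_per_marker

rounded_ratio = round(pens_per_marker, 1)  # pretend we only know "about 1.3"
converted_rough = book_in_markers * rounded_ratio

print(book_in_pens, converted, converted_rough)
```

With the exact ratio, both friends agree the book is 2 pens long; with a rounded ratio, the converted value drifts to 1.95 pens, which is exactly the kind of inaccuracy a shared standard unit avoids.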

This problem actually happened in the real world. Since ancient times, humans have traded using different units from different cultures and regions, which made conversion and price determination difficult. The inconsistency of units often led to fraud and unfairness in trade. This also happened under the Ancien Régime: until 1795, France used many different systems of measurement without a unified standard. There was even widespread abuse of measurement standards for taxation and trade.

The solution to this problem was the creation of a standardized and universal system called the metric system. Thanks to the French Revolution, this system was introduced and later became the foundation of the system used today. We will not go too deep into the history here.

The International System of Units (SI) consists of 7 base units and their corresponding quantities, and is widely used by countries around the world:

| Quantity            | Unit Name | Symbol | Dimension |
| ------------------- | --------- | ------ | --------- |
| Length              | meter     | m      | [L]       |
| Mass                | kilogram  | kg     | [M]       |
| Time                | second    | s      | [T]       |
| Electric Current    | ampere    | A      | [I]       |
| Temperature         | kelvin    | K      | [Θ]       |
| Amount of Substance | mole      | mol    | [N]       |
| Luminous Intensity  | candela   | cd     | [J]       |

Each unit has its own definition and history. For mass, for example, a physical prototype was used: a platinum–iridium cylinder called the International Prototype of the Kilogram. One kilogram was defined as the mass of that cylinder. Copies of the prototype were distributed to many countries as the international standard of mass.

Of course, each definition has been updated over time as technology advances. You can read more about the history of these developments on the official website of the International Bureau of Weights and Measures, the international organization responsible for maintaining these standards:

https://www.bipm.org/en/history-si

 

from Yonle

recently, my friend tried to register on my akkoma, but he never got any email. i checked on my end via smtpctl show queue, but got no queue.

once i checked the log, the mail server had crashed because of two things:
– it couldn't connect to our postgresql db, as it hadn't started yet
– it couldn't bind to the smtp port

so i decided to modify the systemd service unit to always restart 10 seconds after every failure:

[Unit]
Description=OpenSMTPD SMTP server
Documentation=man:smtpd(8)
After=network.target

[Service]
Type=forking
ExecStart=/usr/sbin/smtpd

Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
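an alternative to editing the packaged unit file directly is a systemd drop-in override carrying just the restart policy. this is a sketch; the path assumes the unit is named smtpd.service:

```ini
# /etc/systemd/system/smtpd.service.d/restart.conf
# drop-in override: adds the restart policy without touching the
# packaged unit file. run `systemctl daemon-reload` after creating it.
[Service]
Restart=always
RestartSec=10
```

the upside is that the override survives package upgrades that replace the main unit file.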

hopefully this doesn't happen again.

 

from ひとりワンルーム

As a music lover, especially a fan of a particular artist, there are times when you really want to buy a ticket and watch your favorite artist's live concert.

I'm no different.

Unfortunately, in the country where I live, the sale and distribution of live concert tickets is a total mess. For example, artist A announces a live concert on their social media and sells the tickets on Loket.com; artist B does something similar but sells theirs on Tiket.com; and artist C even builds their own website for promotion and ticket sales.

Truly scattered across many places. Not centralized.

For people who rarely go to concerts (let alone those without popular social media accounts, like me), finding events and buying concert tickets is a struggle in itself.

Now, compare that with Japan.

In Japan, tickets for live concerts and other events are easy to find on eplus tickets. Even big, well-known events like Fuji Rock Festival and Summer Sonic are sold there. Not only local artists, but tickets for overseas artists (holding concerts in Japan) can be found there too. Paying for tickets is also within easy reach; you can even pay at the nearest convenience store.

You could say their live concert ticket distribution can be found on a single platform. No need to search all over.

I hope that one day there will be one ticketing platform dedicated specifically to live concerts (in this country). New artists, popular artists, and international artists would all be welcome to sell their live concert tickets there, so fans would not have to search all over, and paying for tickets would be within easy reach for everyone.

#Note

 

from ひとりワンルーム

From January 13 to January 23, 2025, Ichiko Aoba held concerts celebrating the 15th anniversary of her debut. The 15th-anniversary concerts were held in two cities in Japan: Kyoto and Tokyo. In January 2026, this album was released. Containing 21 songs, it was recorded at Tokyo Opera City Concert Hall, Tokyo, Japan.

You can support Ichiko Aoba by buying her albums, for instance on Bandcamp: https://ichikoaoba.bandcamp.com/music

Best enjoyed when: sitting alone, waiting for someone, amid the human hustle and bustle
Aftertaste: waiting is no longer boring
Best track(s): ココロノセカイ (Kokoro no Sekai) (live at Tokyo Opera City Concert Hall, Tokyo, 2025)
For fans of: Lamp, Kaede, mei ehara

#Music #AlbumSpotlight

 

from Yonle

back then, fedinet.waltuh.cyou depended on eu2.contabostorage.com, the contabo object storage, for its media uploads. initially we found it to be quite useful, but

the media is gone

the media is gone by itself

several weeks later, it also affected me

i also noticed that it wasn't just one of these photos that was gone; it was 4-7 of them. these were only noticeable after i restarted varnish cache (which effectively starts the cache from 0).

recovery is a pain in the ass. public posts are easy to recover by obtaining a copy from a remote instance's CDN cache, but some non-public posts are harder to recover than you'd think, unless you also follow them on your alt account.

initially, this problem happened to a post from my friend, Irfan, where i noticed the media on the post had become a 404. i thought it was temporary, but then it slowly showed itself to be affecting other users too

i contacted contabo support, but they weren't even aware of it either

there were no deletion API calls triggered via S3 in the backend that i host. it was just an upload, and then, several hours later, gone by itself.

their reply:

Dear Lee,

We would like to clarify that there are no deletion actions being triggered from our side in the backend. What about a lifecycle policy? Do you have any that expires objects after some time? Please monitor it from now on and inform us if it occurs again.

let's check the akkoma source code, this part: https://akkoma.dev/AkkomaGang/akkoma/src/commit/f3b39e9ea25bda9bb2e7611f6025499a3cc51c1d/lib/pleroma/uploaders/s3.ex#L34-L38

if we check how the upload is done:

ExAws.S3.upload(bucket, s3_name, [
  {:acl, :public_read},
  {:content_type, upload.content_type}
])

it's basically just that.

but then, we have the attachment cleanup worker, and that doesn't seem to break anything either. i also checked other code relying on delete_file(), and found nothing suspicious.

so that leaves me in a dead end.

time to do our own solution, i guess

hi. seaweedfs


to be frank, the previous setup on fedi.lecturify.net years ago used minio for the s3 storage. even so, there was still something about it that bothered me; i didn't quite like it.

so i was looking around the object-storage tag on github explore and stumbled on seaweedfs, which promises O(1) disk seeks. there's also rustfs, but due to time constraints, i decided it wasn't worth compiling on the server. so i decided to compile seaweedfs instead, which took only 3 mins to finish.

right after it finished compiling, i installed it to /usr/local/bin/, and then made an openrc unit in the container:

~ # cat /etc/init.d/weed
#!/sbin/openrc-run

name="seaweedfs"
description="SeaweedFS service"

supervisor="supervise-daemon"
command="/usr/local/bin/weed"
command_background="no"
command_user="weed"
command_args="server -s3.externalUrl=https://objstorage.waltuh.cyou -s3 -volume.disk ssd -dir /home/weed/data"

no_new_privs="yes"

pidfile="/run/${RC_SVCNAME}/${RC_SVCNAME}.pid"

retry="SIGTERM/60/SIGKILL/5"

error_log="/var/log/${RC_SVCNAME}.log"
output_log="/var/log/${RC_SVCNAME}.log"

depend() {
  need net
}

start_pre() {
  checkpath --directory --owner $command_user:$command_user --mode 0755 /run/${RC_SVCNAME}
  checkpath --file --owner $command_user:$command_user --mode 0644 $error_log
}

that's what i did.

for bucket and user management (including keys), i summon weed admin and access the admin UI locally via ssh port forwarding. i manage everything there.

as usual, i began by syncing everything from the contabo storage via aws-cli, before finally syncing it back to our own bucket, now in seaweedfs. things went surprisingly smoothly.

except on one particular part

every object was inaccessible to anonymous requests by default

to be frank, seaweedfs was a weird one. their admin web UI has no bucket-specific policy.

even their policies menu barely seems to do anything, even after i applied it to the bucket owner. so i deleted that.

when checking around, i found out that if you craft this public-policy.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::fedi/*"
    }
  ]
}

and then apply it via aws cli:

$ aws s3api put-bucket-policy --bucket fedi --policy file://public-policy.json

then it finally allows public object access. strange? i think so.
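one thing that bites with hand-written policy json is typos, so a quick sanity check before handing it to aws-cli can save a round trip. a small sketch, pure python stdlib, nothing seaweedfs-specific (the bucket name is the one from the post):

```python
import json

# hypothetical sanity check: parse the public-read policy and verify its
# basic shape before applying it with `aws s3api put-bucket-policy`.
policy_text = """{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::fedi/*"
    }
  ]
}"""

policy = json.loads(policy_text)  # raises ValueError on malformed json

assert policy["Version"] == "2012-10-17"
for stmt in policy["Statement"]:
    # every statement needs a valid effect and a bucket-scoped resource ARN
    assert stmt["Effect"] in ("Allow", "Deny")
    assert stmt["Resource"].startswith("arn:aws:s3:::")

print("policy looks well-formed")
```

it won't catch semantic mistakes (wrong action, wrong principal), but it does catch the trailing-comma and misspelled-key class of errors before the server sees them.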

even after i applied the policy via aws-cli, the admin UI barely changed. no mention of a bucket policy.

i guess that's all i can tell you.

still curious? i'll give you the current server mood:

+---------------+-------------+-----------+------+
| INSTANCE NAME | CPU TIME(S) |  MEMORY   | DISK |
+---------------+-------------+-----------+------+
| akkoma        | 5013.68     | 395.22MiB |      |
+---------------+-------------+-----------+------+
| mariadb       | 27.82       | 138.09MiB |      |
+---------------+-------------+-----------+------+
| mediaproxyoma | 409.46      | 86.34MiB  |      |
+---------------+-------------+-----------+------+
| pg            | 1945.70     | 193.20MiB |      |
+---------------+-------------+-----------+------+
| s3            | 41146.94    | 358.01MiB |      |
+---------------+-------------+-----------+------+
| toys          | 3739.83     | 86.48MiB  |      |
+---------------+-------------+-----------+------+
| varnish       | 59.29       | 43.76MiB  |      |
+---------------+-------------+-----------+------+
| writefreely   | 36.93       | 37.92MiB  |      |
+---------------+-------------+-----------+------+
Press 'd' + ENTER to change delay
Press 's' + ENTER to change sorting method
Press CTRL-C to exit

Delay: 10s
Sorting Method: Alphabetical

disk:

yonle@waltuh:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       296G   44G  240G  16% /

ram & cpu & load (htop): [screenshot: htop of waltuh.cyou]

alright. that's it. bye.

 

from Jim's Personal Blog

Well, I came back in early April this year. During my dormant time, I was thinking about rebranding and reshaping the goal I had set before. Also, since I left X, I don't think my presence there made me famous or anything. On the other side, I had already given Elon Musk a big middle finger, meaning I would soon quit his Twitter, and I finally did. Even though I felt some disconnection at the time, I took the risk rather than letting my mental health get cooked and end up thinking about going to a psychiatric hospital again.

I was thinking that I was in the wrong community, and it's true: I was in the wrong community at the time. As a person who cannot entertain people and who always takes everything seriously, I think I need to reshape everything and start again from zero. Also, as a disabled person, I need to stay away from a society that harms me in whatever way that occurs. And using terms like “VTuber” or “VCaster” made me part of a minority with a different vision. In the global sense, “VTubers” mostly entertain people through live streams or on-demand content made for entertainment purposes. Meanwhile, since I can't entertain people, I chose a different path. A few virtual content creators have taken a different path, and their goals seem to be running perfectly.

The social challenge

Some of them get people to follow them by leaning on the algorithm, staying active on centralized social networks like Meta's empire (Facebook, Instagram, etc.), Elon's kingdom (X/Twitter), ByteDance's garbage (TikTok), and more. Somehow, I couldn't survive there due to the limitations I am facing. Also, once I realized that I have ADHD, and since in real life I am also disconnected from current society, I didn't know how to grow there. I had no idea.

Since then, I have been trying to adapt to the fediverse, a federation of decentralized social networks that feels like another universe, and I finally found the right place for myself to grow. Since there is no algorithm and any post can be seen by everyone on the fediverse, I feel lucky that I found the right place. To understand the place that feels like home, I learned how the ActivityPub protocol works and how centralized and decentralized social networks differ, in their infrastructure, in the ownership of the servers, and in the social side, which is much different from the biggest centralized social networks.

I know the fediverse currently has a few million active users, nearly 12 to 14 million registered accounts, and thousands of servers active on a network that no one owns, but as a minority I feel safe, and I can adapt there rather than on centralized social networks, whose black-box algorithms made me sick and insane. More following than followers? Not a problem. I want to build relationships with people, person to person, not with masked bots.

How is my rebranding plan going?

Yeah, I decided the older name, “Everything with Jim”, will become a legal business name, and I changed it to “Learn with Jim”, which has the same goal as before, but I decided to serve the content in English and publish first on the fediverse, before sharing on YouTube for video content and on Spotify and YT Music for the audiocast. I set a priority I call “fedi-first”, where I publish content on the fediverse via PeerTube (under a MakerTube instance) for video and via Funkwhale (under Funkwhale Italia) for the audiocast (another name for a podcast in audio format). Also, I want to share with anyone about FOSS, GNU/Linux, sysadmin stuff, and anything related to my hobbies.

That's it from me for now.

 

from autumn

Written: April 25, 2024. Published: March 11, 2026.

The Paradox of Ownership in the Digital Age: How DMCA and DRM Limit Access and Undermine Preservation Efforts

 The digital age has ushered in a revolution in how we access and interact with information and culture. E-books, streaming services, and online libraries offer a seemingly boundless collection of knowledge and creative works at our fingertips. However, the concept of ownership in this new landscape remains shrouded in ambiguity. This essay explores how the Digital Millennium Copyright Act (DMCA) and Digital Rights Management (DRM) technologies, intended to protect copyright, create a system where consumers have limited control over their digital purchases. These limitations hinder fair use, restrict access for institutions like libraries and archives, and ultimately raise fundamental questions about true ownership in the digital age.

 Librarians and archivists stand as the guardians of cultural heritage, ensuring future generations have access to the knowledge and creative output of the past. However, the DMCA throws a wrench into their efforts. The act restricts activities like copying and sharing for educational purposes, which are crucial for libraries fulfilling their mission of disseminating information and fostering creativity. As highlighted by McDermott (2012), “complex copyright laws and a misunderstanding of fair use threaten libraries' ability to fulfill their mission of providing information access and fostering creativity”. Librarians often rely on fair use to share excerpts of copyrighted works for educational purposes, create digital copies for long-term preservation, or offer interlibrary loan services. The DMCA's restrictions on these activities create a chilling effect, hindering innovation and jeopardizing the long-term accessibility of knowledge.

 Imagine a scenario where a library owns a physical copy of a book that is out of print but still protected by copyright. Under the DMCA, the library may be unable to scan and offer a digital copy, even though this could significantly increase accessibility for patrons. This situation exemplifies the tension between copyright protection and the public's right to access information. Furthermore, the DMCA's limitations can restrict libraries from archiving digital materials altogether. A library may be hesitant to acquire e-books due to concerns about the long-term accessibility of the content, potentially impacting user access to valuable resources.

 The DMCA's impact extends beyond access limitations. The act fosters a culture of fear and uncertainty surrounding fair use. Libraries may be reluctant to engage in activities deemed potentially infringing due to the threat of costly litigation, hindering innovation and libraries' ability to effectively serve their communities in this digital age.

 The limitations imposed by the DMCA are further compounded by Digital Rights Management (DRM) technologies. DRM software encrypts content and restricts how users can access and utilize their digital purchases. While DRM serves the purpose of protecting copyrighted material from unauthorized copying and distribution, it also undermines the very notion of ownership in the digital sphere. When consumers purchase an e-book or song, they are essentially buying a license to access the work under certain conditions, not the work itself.

 Scharf (2010) aptly argues that DRM “prioritizes control over user rights”. This translates to limited user control over digital purchases. Imagine purchasing a digital book that you cannot lend to a friend or critically analyze online due to DRM restrictions. This scenario exemplifies how the current system prioritizes control by copyright holders over user rights. Furthermore, the ever-evolving nature of DRM software raises concerns about its long-term compatibility. The potential obsolescence of DRM could render previously purchased content inaccessible in the future, effectively negating any sense of ownership.

 Scharf (2010) further emphasizes the complex relationship between fair use and DRM. “Any attempt to encapsulate fair use provisions within DRM would have drawbacks for both right holders and users...” (p. 182). This quote highlights the inherent tension that exists between user rights and copyright holder control. Striking a balance between the two will be critical in moving forward.

 The limitations of DMCA and DRM extend beyond immediate user experience and have a profound impact on long-term preservation efforts. Libraries and archives face significant challenges in preserving digital content due to these restrictions. As Gasaway (2007) points out, “current limitations on copying and distribution don't translate well to digital media”.

 Unlike physical books, digital files can become inaccessible over time due to changes in file formats or software incompatibility. This presents a significant hurdle for long-term preservation. The focus on “preservation-only” exceptions with restricted access, as discussed in the article by Gasaway (2007), creates a paradox. Restricted access undermines the core purpose of preservation, which is to ensure future generations can access the information. One quote from the article emphasizes this concern: “One question is whether any user should have access to preservation only-copies. In fact, one could argue that the copy is no longer for preservation only if access is being granted to users” (Gasaway, 2007). This quote confirms the concern that restricted access to preserved works challenges the true purpose of preservation, which is to ensure future access. Additionally, the ever-evolving nature of digital formats and technology poses a challenge for long-term preservation.

 While the limitations of current copyright law and DRM pose significant challenges, emerging technologies like blockchain offer a potential solution for securing ownership of digital assets. Blockchain technology utilizes a distributed ledger system, where data is recorded across a network of computers. This creates an immutable record of ownership that is transparent and tamper-proof. Bodó et al. (2018) discuss the potential of blockchain for copyright protection, arguing that “Distributed ledgers are a general-purpose technology, meaning that they are freely configurable to any and every application. In theory, this makes it relatively easy to correspond the core building blocks of blockchain technology to fundamental concepts in copyright law.” (p. 314). This further exemplifies how blockchain technology could potentially be a powerful tool for enforcing intellectual property rights through distributed ledgers.

 In theory, blockchain could be used to track ownership of digital content, ensuring creators receive appropriate compensation for their work. Additionally, blockchain could potentially facilitate secure access control for libraries and archives, allowing them to preserve digital materials while ensuring copyright compliance. However, it is important to acknowledge the limitations of blockchain technology in the context of digital preservation.

 Firstly, blockchain itself cannot store copious amounts of data efficiently. While ownership records could be stored on the blockchain, the actual content would likely need to be stored elsewhere. This raises questions about long-term accessibility and potential compatibility issues between storage solutions and future technologies. Secondly, integrating existing copyright laws with blockchain technology presents a complex challenge.

 Despite these limitations, blockchain offers a promising avenue for exploring new models of digital ownership and preservation. As Bodó et al. (2018) conclude, “Still, should blockchain technology reach its market potential, it may have significant—perhaps transformative—impact on copyright in the digital environment.” (p. 336). Collaboration between stakeholders – including content creators, copyright holders, technology companies, and libraries – will be crucial in determining how best to leverage blockchain for a more balanced digital ecosystem.

 The issue of digital ownership becomes even more complex when considering piracy. While piracy undoubtedly has negative consequences, the article by Kim et al. (2018) introduces a thought-provoking concept: the “invisible hand” of piracy. The authors argue that “When information goods are sold to consumers via a retailer, in certain situations, a moderate level of piracy seems to have a surprising positive impact on the profits of the manufacturer and the retailer while, at the same time, enhancing consumer welfare.” (Kim et al., 2018, p. 1117). They explain how piracy can act as a “shadow competitor,” forcing manufacturers and retailers to lower prices or improve accessibility, potentially leading to a more efficient supply chain (Kim et al., 2018). This challenges the current legal framework and traditional views on ownership of digital goods. The concept of “owning” digital media becomes blurry when copying is near-effortless. Piracy can be seen as a symptom of a broken market, where consumers resort to piracy due to limited access or inflated costs. Perhaps a more nuanced approach to piracy is needed, considering the potential benefits and drawbacks in specific situations.

 In conclusion, the DMCA and DRM, while intended to protect copyright, create a system that undermines the concept of true ownership in the digital age. Consumers have limited control over their purchases, fair use is restricted, and long-term preservation of digital materials is hindered. Librarians and archivists, who play a crucial role in safeguarding cultural heritage, are particularly impacted by these limitations.

 Moving forward, a more balanced approach is necessary, one that respects copyright while ensuring fair use rights, promoting open access, and facilitating long-term preservation of our digital heritage. This could involve a few avenues:

 • Revising DMCA exemptions for libraries and archives: Expanding exemptions to allow libraries to create digital copies for preservation purposes and offer interlibrary loan services for digital materials.

 • Exploring alternative preservation strategies: Investigating the potential of blockchain technology for secure ownership records while exploring complementary strategies for content preservation outside the blockchain ecosystem.

 • Encouraging collaboration between content creators, copyright holders, technology companies, and user groups to develop new models that prioritize both ownership and accessibility. This could involve exploring innovative licensing models that offer more user control and exploring new revenue streams for content creators in the digital age.

 • Re-evaluating the role of piracy: Considering the potential benefits and drawbacks of piracy in specific contexts and exploring strategies to address the underlying issues that lead to piracy, such as limited access or high costs.

By addressing these challenges, we can move towards a digital ecosystem that fosters creativity, ensures long-term access to information, and respects the rights of both creators and consumers. A system that strikes a balance between copyright protection and fair use is essential for a healthy digital environment where knowledge and culture can continue to thrive.

Some additional considerations we can take with us moving forward:

 • Educating users about copyright law, fair use rights, and responsible digital citizenship can help foster a more balanced environment. Libraries and educational institutions can play a crucial role in these efforts.

 • Developing open access models and initiatives that ensure the public has access to scholarly research and cultural heritage materials, helping to democratize access to knowledge and encourage innovation.

 • Investing in robust and secure digital storage solutions for long-term preservation of digital materials. Collaboration between government agencies, libraries, and technology companies will be key in achieving these goals.

 Ultimately, the question of ownership in the digital age is a complex one with no easy answers. However, by fostering dialogue, exploring innovative solutions, and prioritizing both access and creator rights, we can create a more equitable and sustainable digital future.

 Capitalists: ...“You will own nothing and you will be happy.”  Everyone else: ...“Stand up me hearties, yo ho!”

Reference List

Bodó, A., et al. (2018). Copyright in the Blockchain Era: Enforcing Intellectual Property Rights Through Distributed Ledgers. Journal of Intellectual Property Law & Practice, 13(8), 741-750.

Gasaway, L. (2007). Digital Millennium Copyright Act and Library Preservation: A Paradox of Access and Control. Library Resources & Technical Services, 51(4), 1329-1337.

Kim, J., et al. (2018). The Invisible Hand of Piracy: How Moderate Levels of Piracy Can Benefit Businesses and Consumers. Journal of Marketing Research, 55(5), 1112-1132.

McDermott, S. (2012). The Chilling Effects of Copyright Law on Libraries and Archives. D-Lib Magazine, 18(5/6), 1-10.

Scharf, M. B. (2010). Fair Use in a Digital World: The Future of User Rights in the Information Society. Duke Law Journal, 60(2), 181-238.

 
Read more...

from autumn

Installation guide for Debian 13 'Trixie', Wayland, X11, & NVIDIA:

(tip: run commands as root with su -)

Step 1. Add contrib & non-free in /etc/apt/sources.list

deb http://deb.debian.org/debian/ trixie main contrib non-free non-free-firmware

deb http://security.debian.org/debian-security/ trixie-security contrib non-free main non-free-firmware

...and often also for -updates:

deb http://deb.debian.org/debian/ trixie-updates non-free-firmware non-free contrib main

Example of modified sources.list:

#deb cdrom:[Debian GNU/Linux 13.3.0 _Trixie_ - Official amd64 DVD Binary-1 with firmware 20260110-11:00]/ trixie contrib main non-free-firmware

deb http://deb.debian.org/debian/ trixie main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian/ trixie main contrib non-free non-free-firmware

deb http://security.debian.org/debian-security/ trixie-security contrib non-free main non-free-firmware
deb-src http://security.debian.org/debian-security trixie-security contrib non-free main non-free-firmware

# trixie-updates, to get updates before a point release is made;
# see https://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_updates_and_backports
deb http://deb.debian.org/debian/ trixie-updates non-free-firmware non-free contrib main
deb-src http://deb.debian.org/debian/ trixie-updates non-free-firmware non-free contrib main

# This system was installed using removable media other than
# CD/DVD/BD (e.g. USB stick, SD card, ISO image file).
# The matching "deb cdrom" entries were disabled at the end
# of the installation process.
# For information about how to configure apt package sources,
# see the sources.list(5) manual.
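If you prefer not to edit the file by hand, step 1 can also be sketched as a sed one-liner. It is shown here on a demo line; for real use, run it on /etc/apt/sources.list after making a backup. It appends the extra components to any deb/deb-src line that currently ends at "main":

```shell
# Demo input line; substitute /etc/apt/sources.list (backed up) for real use.
demo='deb http://deb.debian.org/debian/ trixie main'
# Append contrib, non-free and non-free-firmware to lines ending in "main".
patched=$(printf '%s\n' "$demo" |
  sed -E 's/^(deb(-src)? .* main)$/\1 contrib non-free non-free-firmware/')
echo "$patched"
```

Lines that already list extra components (like the security line above) are left untouched by this pattern, so review the result before replacing the file.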

Step 2. apt update

Step 3. apt install linux-headers-amd64

Step 4. apt install nvidia-kernel-dkms nvidia-driver firmware-misc-nonfree nvtop

Step 5. mokutil --import /var/lib/dkms/mok.pub

Step 6. When prompted, choose and enter a password (you will need to re-enter it at the MOK screen on the next boot)

Step 7. systemctl reboot

Step 8. On boot there will be a prompt to enroll the MOK, select yes; when asked, enter the password from step 6

Step 9. Enter a TTY with CTRL+ALT+F3, enter your username and password, then run sudo nano /etc/default/grub

(Tip: you can generally select an x11 session in the bottom left corner of the login screen if you want a graphical interface or get stuck)

Step 10. Add nvidia_drm.modeset=1 as a boot option. This is done by appending it to GRUB_CMDLINE_LINUX_DEFAULT="" in /etc/default/grub, without deleting any other parameters.
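As an alternative to editing by hand, this step can be sketched with sed, shown here on a demo string; point it at /etc/default/grub (after a backup) for real use. It prepends the option inside the existing quotes without touching parameters that are already there:

```shell
# Demo line standing in for the one in /etc/default/grub.
demo='GRUB_CMDLINE_LINUX_DEFAULT="quiet"'
# Insert nvidia_drm.modeset=1 right after the opening quote.
patched=$(printf '%s\n' "$demo" |
  sed -E 's/^(GRUB_CMDLINE_LINUX_DEFAULT=")/\1nvidia_drm.modeset=1 /')
echo "$patched"
```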


Example of modified grub file:

# If you change this file or any /etc/default/grub.d/*.cfg file,
# run 'update-grub' afterwards to update /boot/grub/grub.cfg.
# For full documentation of the options in these files, see:
#   info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`( . /etc/os-release && echo ${NAME} )`
GRUB_CMDLINE_LINUX_DEFAULT="nvidia_drm.modeset=1 nvidia-drm.fbdev=1 quiet"
GRUB_CMDLINE_LINUX=""

# If your computer has multiple operating systems installed, then you
# probably want to run os-prober. However, if your computer is a host
# for guest OSes installed via LVM or raw disk devices, running
# os-prober can cause damage to those guest OSes as it mounts
# filesystems to look for things.
#GRUB_DISABLE_OS_PROBER=false

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE/GOP/UGA
# you can see them in real GRUB with the command `videoinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

Step 11. Within /home/username, create the hidden directory .nvtmp

Step 12. Set module options for the nvidia module in /etc/modprobe.d/nvidia-options.conf: uncomment options nvidia-current NVreg_PreserveVideoMemoryAllocations=1 and add NVreg_TemporaryFilePath=/home/username/.nvtmp to the same line. (Module options are not expanded by a shell, so use an absolute path here rather than ~.)

sudo nano /etc/modprobe.d/nvidia-options.conf


Example of modified nvidia-options.conf file:

#options nvidia-current NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=44 NVreg_DeviceFileMode=0660

# To grant performance counter access to unprivileged users, uncomment the following line:
#options nvidia-current NVreg_RestrictProfilingToAdminUsers=0

# Uncomment to enable this power management feature:
options nvidia-current NVreg_PreserveVideoMemoryAllocations=1 NVreg_TemporaryFilePath=/home/username/.nvtmp

# Uncomment to enable this power management feature:
#options nvidia-current NVreg_EnableS0ixPowerManagement=1

Step 13. Add your modules to the initramfs by editing /etc/initramfs-tools/modules and adding nvidia, nvidia_drm, nvidia_uvm, and nvidia_modeset to the module list.

sudo nano /etc/initramfs-tools/modules and add the module names one per line: crc32c, nvidia, nvidia_drm, nvidia_uvm, nvidia_modeset


Example of modified /etc/initramfs-tools/modules file:

# List of modules that you want to include in your initramfs.
# They will be loaded at boot time in the order below.
#
# Syntax:  module_name [args ...]
#
# You must run update-initramfs(8) to effect this change.
#
# Examples:
#
# raid1
# sd_mod
crc32c
nvidia
nvidia_drm
nvidia_uvm
nvidia_modeset
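The edit above can also be scripted; a sketch, assuming the same five modules. It appends to a scratch copy ("modules.new") so you can review the result before moving it into place as root:

```shell
# Work on a copy of the file; fall back to an empty file if it's missing.
cp /etc/initramfs-tools/modules modules.new 2>/dev/null || : > modules.new
# /etc/initramfs-tools/modules is a plain list of module names, one per line.
printf '%s\n' crc32c nvidia nvidia_drm nvidia_uvm nvidia_modeset >> modules.new
```

Once the list looks right, move modules.new over /etc/initramfs-tools/modules as root.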

Step 14. Regenerate the initramfs to include the changes you have made.

sudo update-initramfs -u -k all

Step 15. generate grub.cfg

sudo update-grub OR sudo grub-mkconfig -o /boot/grub/grub.cfg

Step 16. Before rebooting, enable scripts to allow wake from suspend/hibernate using systemd.

sudo systemctl enable nvidia-suspend.service nvidia-hibernate.service nvidia-resume.service

Step 17. systemctl reboot

Step 18. login

 
Read more...

from Yonle

swf died around 2020, but then came back with a fresh new player, known as ruffle, which tackles many of the problems the original player had.

there have been attempts to get it working and make it alive again; platforms that had it back then, like newgrounds and kongregate, tried exactly that by putting ruffle onto their websites. that works, but most swfs stay buried nowadays, and it's not even as instant as it was back then.

halfne miku studio

a miku clock swf playing on a post

on fedi, the ability to play swf is limited to a small number of instances, and depends especially on the frontend being used. as of the time i'm writing this (March 9th 2026), the only frontends that can play swfs are:

– Pleroma-FE,
– Sharkey, a misskey fork,
– Waltuh.cyou's akkoma-fe fork

although the first two frontends can do the job, some potential security risks are obvious.

same context, window for hell.

if you check how these two frontends do their job, the flow looks like this:

frontend -> ruffle.js -> ruffle.wasm -> flash

this is fine by itself. ruffle already has its own sandbox set up per SWF; however, we still have a potential security window open. even though we have disabled allowScriptAccess, there are still risks: for example, if ruffle itself is vulnerable (which is rare), it could basically alter or read the page for its own purposes.

and we don't want those potential risks to happen.

my “secured” approach

if you remember, akkoma and pleroma started playing with CSP to tackle vulnerabilities from media (e.g. images, videos, etc). it doesn't sound like it makes much sense, but you can't blame them for protecting everyone. akkoma, for example, has begun to encourage everyone to switch their media domain so it's not the same as their instance/frontend domain.

the cross-domain approach, though, i would say is smart. you see, with cross-domain, if anything happens in content loaded from a different domain, nothing on the root domain is impacted much: cookies, localStorage, etc.

and so, i did the same on the flash function.

akkoma-fe actually has code for loading SWF, but it's mostly unused, as they never bundled ruffle in the end; the code is there, but it didn't work. it's also as vulnerable as what i described above.

so, my approach is now this:

akkoma-fe (has csp) -> iframe (with sandbox) -> simpleWebFlashViewer (different domain, also has it's own csp) -> ruffle.js -> ruffle.wasm -> swf

you could say this is way more than necessary, and i agree. however, it's on purpose: i was focusing a lot on security, so i had to make it this way.

for this job, modifying akkoma-fe alone wasn't enough; i also had to alter akkoma-be to adjust its CSP header so the iframe would work.

simpleWebFlashViewer is a small thing i made as the swf loader, paired with ruffle. so all akkoma-fe needs to do is basically iframe it and forget about it. i also documented how you should apply the CSP on simpleWebFlashViewer, despite it being a simple static page. it's highly recommended that you do that.

this approach, despite being overkill, is actually able to destroy the ruffle player properly with no potential memory leakage. when you press the stop button, it removes the iframe from the frontend, immediately freeing the memory and CPU usage in an instant. as a result, when users face such a problem, they don't need to refresh the entire page.

as a result, my safety plug worked. the recent Windows XP tour SWF that i uploaded tries to access intro.txt on the path where simpleWebFlashViewer is located, which is nonexistent.

even if it tried to access beyond that, i guess it's pretty impossible, as there's only /media, /proxy, and /static/flashplayer (this is where simpleWebFlashViewer lives), and the player & loader itself is on a different subdomain (media.fedinet.waltuh.cyou) than the root (fedinet.waltuh.cyou).

what if you're an admin of an existing akkoma instance?

to be frank with you, i think you can simply alter the CSP header to achieve the same thing as what my akkoma-be did, and then use my akkoma-fe on your instance. you don't need to trust me, and i highly doubt you would, but if you're fine with experiments, go on.

you then start serving simpleWebFlashViewer on a different subdomain, say your own media domain at https://fedimedia.yourdomain.com/static/flashplayer/. if you're using caddy, then a config like this would make sense:

  handle_path /static/flashplayer* {
    root /var/www/flashplayer/
    header "Cache-Control" "public, max-age=604800, immutable"
    header "Content-Security-Policy" "script-src 'self' 'wasm-unsafe-eval'; style-src 'self' 'unsafe-inline'; frame-ancestors 'self' https://fedinet.waltuh.cyou"
    file_server
  }

you will want the client's browser to cache the ruffle WASM aggressively, because the runtime itself is around 13 MB.
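if you're on nginx instead of caddy, an untested sketch of an equivalent block might look like this (assuming the same /var/www/flashplayer layout and the same instance domain as above):

```
location /static/flashplayer/ {
    # hypothetical layout: serve files straight from /var/www/flashplayer/
    alias /var/www/flashplayer/;
    add_header Cache-Control "public, max-age=604800, immutable";
    add_header Content-Security-Policy "script-src 'self' 'wasm-unsafe-eval'; style-src 'self' 'unsafe-inline'; frame-ancestors 'self' https://fedinet.waltuh.cyou";
}
```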

now, on your akkoma, copy instance/static/frontends/pleroma-fe/<ref>/static/config.json to instance/static/static/config.json, and edit the config, adjusting flashplayer_loader to look like this:

{
  ...
  "flashplayer_loader": "https://fedimedia.yourdomain.com/static/flashplayer/"
}

conclusion

an anime girl with a wide mouth open on a fan

well, what do you think? although the solution i made is a bit overkill, it ended up more on the better side than the worse. arbitrary risks are somewhat minimized now that ruffle runs in a different page context (the iframe).

if you're curious, how about giving it a try at fedinet.waltuh.cyou? our instance is currently open for registration.

that's it from me.

you can also watch the video showcasing this functionality.

happy hacking.

 
Read more...

from poes

On macOS Tahoe 26.3, the FortiClient VPN GUI doesn't run properly. Maybe it's a bug, or maybe FortiClient is just too dumb, making the app run extremely slowly or not work at all.

I've reinstalled it over and over, but it still won't run properly. The FortiClient version I use is the legacy 6.0, which should be safe and smooth to use. Is it because of Tahoe 26.3? Could be, so I tried to install the current version, but to download it Fortinet asks for KYC data. No way am I sharing personal data with them.

Luckily there's OpenFortiVPN, which is lightweight and runs smoothly from the terminal; I installed it from Homebrew. The README file is very clear, and everything was a happy ending three minutes later.
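For reference, a minimal openfortivpn config sketch. The gateway and username here are hypothetical placeholders; your real values come from your VPN admin. Drop it into /etc/openfortivpn/config:

```
# /etc/openfortivpn/config — hypothetical values, adjust for your gateway
host = vpn.example.com
port = 443
username = youruser
# leave the password out and openfortivpn will prompt for it
```

Then connect with sudo openfortivpn; it reads that config file by default.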

 
Read more...