09 Feb 2021
We are all aware of the previous VBIOS fiascos the RX 5600 XT has been a part of. As a quick refresher, AMD released a VBIOS update for the RX 5600 XT to make it more competitive, in response
to a price cut from NVIDIA on cards competing in the same price range. This update boosted clocks and power limits, pushing the silicon further than originally intended. Despite this, AMD still limited the performance of the RX 5600 XT via firmware and the driver, by imposing a hard limit of 1820 MHz on core clock, 1860 MHz on memory clock, and 180 W on ASIC power. Attempting to
push the card further (for example, with soft PowerPlay tables) would cause the card to instantly drop to its lowest clock speed.
This is a shame, because the RX 5600 XT actually has a lot of overclocking headroom, capable of competing with the RX 5700 and RX 5700 XT.
To go beyond these limits and break the 2 GHz barrier, we need to bypass both the firmware and driver restrictions. Recently,
u/BITBY_RU released bona fide unlock VBIOSes which were capable of bypassing
the firmware restrictions placed on this card, crediting “a man from Bulgaria”. They are intended for cryptocurrency mining,
although they serve their purpose for boosting gaming performance as well.
DISCLAIMER: By following the instructions presented in this article, you are doing so at your own risk.
I am not liable for any damage you do to yourself or your hardware, and you are responsible for doing your own research.
Modifying the VBIOS of your graphics card is an inherently dangerous thing, and will void your warranty.
Before continuing, I highly recommend reading through
the GamersNexus guide to flashing VBIOS for RX 5600 XT, which contains useful information regarding recovery in case you brick your card.
Prerequisites
You will need the following before starting:
- MorePowerTool (MPT)
- Red BIOS Editor (RBE)
- ATIFLASH v2.93+
- An unlock VBIOS from BITBY.RU matching your card and memory vendor
Creating the overdrive unlock VBIOS
The unlock VBIOSes provided by BITBY.RU come with default overdrive limits. Therefore, we will need to create an MPT profile with our desired
overdrive limits, and apply it to the unlock VBIOS.
MPT profile
First, open MPT and open the unlock VBIOS you downloaded earlier. You will want to modify the overdrive limits and power/voltage
to higher values so that you will be able to overclock it later. For reference, here are the values I chose:

The most notable changes are to the GFX Maximum Clock and Maximum Voltage
GFX. I do not recommend changing Power Limit GPU or TDC Limit GFX unless
you have the thermal headroom.
After you are done configuring, DO NOT click Write SPPT. Instead, click Save and save the MPT profile somewhere.
RBE modifications
Next, open RBE and load the unlock VBIOS. You will want to change the GPU ID to 5700XT, which will cause the driver
to think the card is an RX 5700 XT and bypass the driver restrictions. Next, navigate to the PowerPlay tab and
load the MPT profile you created earlier. Click Save and save this modified unlock VBIOS somewhere.

Flashing
You will now flash the modified unlock VBIOS onto your graphics card.
This is the most dangerous part of the process, make sure you have read my disclaimer above and have taken the necessary precautions.
First, open an administrator command prompt. You can do so by searching “cmd” in the start menu, right clicking it, and
choosing “Run as administrator”.

In the admin CMD, change directory to where you extracted ATIFLASH v2.93+ using cd /d <path>.
For example, I extracted it to my desktop, so I would run cd /d "C:\Users\netdex\Desktop\293plus".
Next, copy your modified unlock VBIOS into this directory. Make sure the filename contains no spaces.
For example, the contents of my directory now look like this:
C:\Users\netdex\Desktop\293plus>dir
Directory of C:\Users\netdex\Desktop\293plus
02/07/2021 12:36 PM <DIR> .
02/07/2021 12:36 PM <DIR> ..
02/07/2021 11:12 AM 377,344 amdvbflash.exe
02/07/2021 11:12 AM 12,048 atidgllk.sys
02/07/2021 11:12 AM 22,800 atikia64.sys
02/07/2021 11:12 AM 14,608 atillk64.sys
02/07/2021 11:12 AM 6,446 doc.txt
02/07/2021 11:12 AM 218 how-flash.txt
02/08/2021 09:00 AM 524,288 ulfakempt.rom
10 File(s) 2,532,546 bytes
2 Dir(s) 27,770,851,328 bytes free
… where ulfakempt.rom is my modified unlock VBIOS file.
In the admin CMD, run amdvbflash -i. You will get output like this:
C:\Users\netdex\Desktop\293plus>amdvbflash -i
adapter bn dn fn dID asic flash romsize test bios p/n
======= == == == ==== =============== ============== ======= ==== ==============
0 28 00 00 731F Navi10 W25Q80 100000 pass -
Under the adapter column is the GPU ID for each respective GPU in your system. Note which GPU ID
corresponds to your RX 5600 XT (shown as “Navi10” here).
Before we continue, you should make a backup of your current VBIOS in case something goes wrong.
You can do so by running amdvbflash -s <GPU ID> bios0.rom in the admin CMD, which will save the current VBIOS into a file called bios0.rom.
Now, we will proceed to actually flash the modified unlock VBIOS onto your graphics card. I’m sure I’ve already warned you
enough about the potential dangers.
In the admin CMD, run amdvbflash -f -p <GPU ID> <MODIFIED_UNLOCK_VBIOS.ROM>.
For example, I would run amdvbflash -f -p 0 ulfakempt.rom.
Then, wait for the flash to successfully finish, and restart
your computer when it tells you to. If everything went well, congrats! Your
RX 5600 XT is now fully unlocked, and you can now proceed with the usual
overclocking game to try and push your hardware to its limits.
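For reference, the whole flashing sequence above condenses to a few commands. The path, GPU index 0, and ROM filename are from my setup; substitute your own values.

```
cd /d "C:\Users\netdex\Desktop\293plus"
:: list adapters and note the GPU index of your RX 5600 XT
amdvbflash -i
:: back up the current VBIOS first
amdvbflash -s 0 bios0.rom
:: flash the modified unlock VBIOS
amdvbflash -f -p 0 ulfakempt.rom
```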
I won’t go over exactly how to tune your overclock, because it’s outside the
scope of this article. Instead, I’ll tell you the overclock I was able to
achieve on my card with the unlock VBIOS: core clock 2050 MHz @ 1.1V, mem
clock 1800 MHz. The limiting factor for me is power/thermals; I currently have
the limit set to 200 W, but anything higher leads to scary junction temps.

Additional reading
- Tutorial how to unlock rx 5600 xt
- Unlocked bios for Gigabyte RX 5600 XT 6GB GAMING OC 3 Fan GV-R56XTGAMING Samsung + Micron
- u/BITBY_RU
- Unlocked modified BIOS for AMD Radeon RX 5600 XT samsung + micron + hynix
18 Oct 2020
Out of the box, Pukiwiki’s text editor is a simple textarea. Of course, this
leaves much to be desired, such as:
- Auto-indentation
- Syntax highlighting
- Line numbering
- Find/replace
- Vim bindings (a must-have)
- etc.
We can use Ace to implement this functionality,
by embedding the text editor in place of the default textarea used for editing.
Tutorial
At the end of pukiwiki.skin.php, before the </body> tag, insert the following
two script includes:
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/ace/1.4.4/ace.js"></script>
</body>
At the end of main.js, add the following code:
// adapted from https://stackoverflow.com/a/19513428
$(function () {
    $('textarea[data-editor]').each(function () {
        var textarea = $(this);
        var mode = textarea.data('editor');
        var editDiv = $('<div>', {
            position: 'absolute',
            width: textarea.width(),
            height: textarea.height(),
            'class': textarea.attr('class')
        }).insertBefore(textarea);
        textarea.css('display', 'none');
        var editor = ace.edit(editDiv[0]);
        editor.renderer.setShowGutter(textarea.data('gutter'));
        editor.setReadOnly(textarea.data('readonly') === true);
        editor.getSession().setValue(textarea.val());
        editor.getSession().setMode("ace/mode/" + mode);
        editor.setTheme("ace/theme/chrome");
        if (textarea.data('grow') === true) {
            editor.setOptions({
                maxLines: 180
            });
        }
        // copy back to textarea on form submit...
        textarea.closest('form').submit(function () {
            textarea.val(editor.getSession().getValue());
        });
    });
});
In html.php, change the msg textarea to this:
<textarea name="msg" data-editor="c_cpp" data-gutter="true" cols=90 style="height:80vh">$s_postdata</textarea>
Additionally, since we already have Ace loaded, we can use it for syntax highlighting.
The version of Ace.js that this site uses has a special Pukiwiki
syntax highlighter module which implements rudimentary support for
Pukiwiki markup. My fork is available on
GitHub.
I got a bit lazy translating the context-free grammar into regexes, so
the result is a bit incomplete.
20 Jul 2020
A part of my workflow involves using restic to backup data
onto an external hard drive. After a migration, I needed to perform a rather
large ingestion into my restic repo (about 500GB). However, in the middle
of the backup my drive suddenly disappeared without a trace. No drive letter,
not even in device manager.
Thinking it was just a fluke, I unplugged the drive and plugged it back in.
It appeared normally in Explorer, and worked just fine. Ran chkdsk on it and
it returned no errors. Weird. Ran the backup again, starting from the very
beginning. The drive disappeared again half an hour later.
Something was definitely wrong, so I looked at Event Viewer to see if
Windows agreed with me. I saw a number of worrying warnings…

Namely, UASPStor: Reset to device, \Device\RaidPort3, was issued.
Burn these words into your mind. Cursory search results pointed
towards hardware failure or system instability. I didn’t buy it.
I shucked the drive out of the external enclosure, connected it directly
via a SATA interface, and ran the backup again. After about half an hour,
the backup was still running. But the speed had dropped to a whopping 1 MB/s,
and the average access latency was in the minutes.
I stopped the backup again, frustrated. In complete silence, I tried to
come up with any fathomable reason why I could not backup my files. That was
when I heard a peculiar noise. The clickety-clackety sound my hard drive makes
when it moves to do writes.
I opened Task Manager and checked the current write speed to the drive. Zero.
It was at this moment that everything became abundantly clear.
My external drive probably uses Shingled Magnetic Recording
(DM-SMR specifically).
One fantastic aspect of SMR drives is that they have a much better data density
than conventional PMR/CMR drives. One less fantastic aspect of SMR drives is
that their write speeds are complete garbage.
SMR drives abuse the fact that you can read from tracks much narrower than
you can write, by stacking them on top of each other like the shingles on a roof.
This makes them great for write-once, read-many applications, since read performance
is not affected. However, if you want to write even a single sector to the drive,
you will have to read and write the entire zone of tracks that overlap.
Your 4K write just turned into a 256MB one.
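To put a number on that, here is a back-of-the-envelope calculation. The 256 MB zone size is the illustrative figure from above, not a measured value for any particular drive:

```python
# Write amplification of a single 4 KiB random write on an SMR drive,
# assuming the entire overlapping zone must be read-modify-written.
ZONE_SIZE = 256 * 1024 * 1024   # 256 MiB zone (illustrative)
WRITE_SIZE = 4 * 1024           # one 4 KiB sector write

amplification = ZONE_SIZE // WRITE_SIZE
print(amplification)  # 65536: each byte written costs 64 Ki bytes of I/O
```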
Device-managed SMR drives (DM-SMR) make the fact that they are SMR drives
completely transparent to the host system. That is, they expose a normal
interface and handle all the additional work that SMR drives need to do. Since
SMR drives have amazingly awful write performance, many levels of caching
are usually added to improve performance. For example, a DM-SMR drive may have
a fairly large PMR cache, and a relatively small DRAM cache.
The drive will try to optimize the use of the PMR region and the DRAM cache
to avoid having SMR writes in the critical path. When the drive is idling, the
controller will slowly siphon data from the PMR cache to the SMR region,
freeing up PMR cache space.
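A toy simulation makes the failure mode concrete. All the rates and sizes below are made up purely for illustration; the point is the shape of the curve, not the values:

```python
def simulate_dm_smr(minutes, ingest, drain, cache_size, smr_speed):
    """Toy DM-SMR model (units: MB and MB/min). Writes land in the
    PMR cache at full speed until it fills; after that, they hit the
    SMR region directly and throughput collapses."""
    cache_used = 0.0
    throughput = []
    for _ in range(minutes):
        if cache_used < cache_size:
            written = ingest
            # the cache fills at the ingest rate minus background destaging
            cache_used = min(cache_size, cache_used + ingest - drain)
        else:
            # cache exhausted: raw (terrible) SMR random-write speed
            written = smr_speed
        throughput.append(written)
    return throughput

rates = simulate_dm_smr(minutes=60, ingest=100, drain=10,
                        cache_size=1000, smr_speed=1)
print(rates[0], rates[-1])  # starts fast (100), ends at a crawl (1)
```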
What happens when the PMR cache is full? Writes go directly to the SMR drive.
You know what SMR drives are really bad at? Writing. You know what
SMR drives are even worse at? Random writes. You know what restic
likes to do a lot? Yeah. The drive shits the bed so hard that the UASP controller
thinks the drive is literally gone.
Whose fault is this?
- The UASPStor driver probably needs to handle timeouts better than it does right now, since apparently
minutes of access latency are a real use case.
- Drive manufacturers need to label their disks with the technology they use.
- Software writers need to optimize for what spinning disks do great: sequential reads and writes.
Let block sizes be configurable, because sometimes I want to waste a bit of space if it means saving hours of time.
</rant>
20 Jul 2020
I recently acquired a Gigabyte RX 5600 XT Gaming OC, hoping that the widely
reported driver issues were an overreaction from a vocal minority.
Unfortunately, this was not the case - I encountered several stability issues
throughout my first few weeks of using this card. This post goes over the
troubleshooting process I went through while trying to get this card to work.
As a foreword, nobody should ever go through this much effort to get a
product they purchased to work properly. If it wasn’t for Canada Computers’
abysmal return policy, I would have jumped ship the instant my screen turned
green.
Symptoms
The most common issue I experienced with this card is the infamous black screen,
characterized by the following symptoms:
- The screen suddenly turning black while other computer functionality
remains (such as sound)
- Sometimes, the cursor may still work or flicker
- On screens connected via HDMI, the screen may turn green instead
- After a few seconds, the entire system will stop responding, probably
due to TDR (increasing TdrDelay causes this period of “functionality” to last longer)
- Almost always happens while gaming, regardless of whether
the game was graphically intensive or not
- Has occurred randomly during other tasks, such as unlocking my
workstation or watching a video in MPC-HC
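For reference, the TdrDelay value mentioned above is a Windows registry setting, applied with a .reg file like the following. The 8-second value here is just an example (the default is 2 seconds):

```
Windows Registry Editor Version 5.00

; TdrDelay: seconds Windows waits before resetting a hung GPU driver
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:00000008
```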
Diagnosis
System Memory Timings
When I first got this card, I happened to have slightly unstable RAM timings for my system memory
(about 1 bit flip after 8 passes of memtest). The memory was running at 3000 MT/s 14-16-16-16-32.
I have since then loosened the memory timings to 2933 MT/s 16-17-17-17-35.
Other people in the community have reported similar issues.
It seems these cards are fairly sensitive to marginal memory timings, since
I never had any issues with the original timings on any other card.
BIOS
You are probably already aware of the BIOS fiasco AMD sprung on its partners
in order to make the RX 5600 XT more competitive. Gigabyte released two versions,
F2 (stability) and FA0 (performance). I have not been able to achieve system
stability with FA0, despite the only differences between them being VRAM speeds and
power limits. You can replicate the FA0 performance solely with the F2 bios and Wattman.
Update: As of now, the FA0 VBIOS is no longer available on Gigabyte’s website.
It turns out that there are two different VBIOSes from the factory, F1 and F60.
It is worth noting that F1 and F60 share all parameters inspectable
by MorePowerTool. Despite this, Gigabyte notes that the VBIOS upgrade path
to F2 and FA0 is only supported for cards that come with the F1 VBIOS.
That is, cards that come with F60 are inherently less overclockable than
cards that come with F1, despite being sold as the same product.
After reducing my clock speeds to F60 defaults while having F2 flashed,
I no longer had any black screen issues.
Update: Gigabyte released three new VBIOS versions: F61, F3 and FA1 - one
corresponding to each previous version. I could not identify any difference in
key parameters between these versions. However, I am able to run F61
without any stability issues (even overclocked, no less).
Fan Curves
The default fan curves are inadequate, and will almost certainly lead to
thermal throttling. The throttling did not seem aggressive enough to
prevent system instability, however. I ended up ramping up the fan curve by 20% PWM.
Chipset Drivers
I had the motherboard manufacturer’s chipset drivers installed, which were
slightly outdated compared to the chipset drivers available directly from AMD.
Adrenalin 2020
Some people have seen success from installing the display driver without the Adrenalin software.
This would imply that there is some component of Adrenalin which causes these black screen
issues. Given the amount of introspection into games that Adrenalin does (counting average FPS,
number of hours played, etc.), it would not be surprising if some of this
functionality is broken.
This would also explain why I have only experienced these black screens
while playing games. All 50 or so of the black screens I have encountered were
while playing a game.
Without Adrenalin, you can still apply overclocks with tools such as
OverdriveNTool. I personally had issues with tools like MSI Afterburner,
as they do not give enough control over voltage curves.
The Verdict
My leading theory: the card I purchased came with the F60 VBIOS.
Flashing F2 was thus unsupported, which led to the instability. The
instability was caused by either power limits, memory frequency,
or some VBIOS firmware that is not compatible with cards that
come with F60.
I am not hopeful that a future VBIOS will be released as an
upgrade path from F60 (i.e. F61), since it is extremely likely that
cards that come with F60 are just poorer bins which are not capable
of 14 Gbps anyway.
When purchasing this card, there is no way for a consumer to tell
whether they are getting a card with F60 or F1 VBIOS. There are
significant performance differences between these cards. Gigabyte does
not make this clear, with AMD’s official site even listing this card
as upgradable to 14 Gbps.
Do not buy this card.
Update: Given the new VBIOS updates, this card is now perfectly fine. I
wish Gigabyte had issued some sort of statement, or postponed the card’s
release until the new VBIOS was developed, as it would have saved me many
hours of headaches.
13 Apr 2020
This post serves as a log of some of the hoops I had to jump through in
order to get LineageOS 17.1 to build on ArchLinux, as a reminder to myself
in case I need to do this again in the future. As a result, it is rather brief
and incomplete. Consider filling in the gaps with the official build
documentation from the LineageOS Wiki.
This guide assumes working knowledge of ArchLinux and Android development,
but makes an effort to link to relevant reading.
Prerequisites
Packages
Required for build
- repo
- lineageos-devel (AUR: https://aur.archlinux.org/packages/lineageos-devel/)
- multilib-devel (https://wiki.archlinux.org/index.php/Official_repositories#multilib)
- base-devel
- ttf-dejavu (Fun fact: It’s used to generate the PNG image for the LOS recovery splash screen)
- lib32-ncurses5-compat-libs
- …
Recommended
- android-tools (platform-tools)
- android-udev (udev rules for Android ADB)
System considerations
In my experience, the following system specifications are a bare minimum
for an enjoyable development experience:
- A 64-bit superscalar processor from this decade
- 200GB of solid state storage
- At least 16 GB of RAM, at the bare minimum 8 GB with some tweaks
- A relatively cool climate and a nearby window to vent heat
Making it work
There are a number of optimizations which can be performed to “make it work”
despite not having the best computer or internet connection:
Reduce simultaneous jobs
For repo sync or brunch, consider using fewer simultaneous jobs (via the -j
flag). This has the effect of:
- Using less memory at once
- Using less network bandwidth at once (for repo)
- Using less compute resources at once
- Making your build take proportionally longer
Build components individually
For systems with less than 16 GB of RAM, there are some parts of the build
which will OOM (I’m looking at you, metalava). These components can be
selectively built individually, to reduce the number of tasks your computer
is processing at once (see the top of build/envsetup.sh for make commands).
Use swap and zswap
You should probably use
swap on relatively fast storage
in order to handle sudden spikes in memory usage.
This includes enabling
zswap. Many build steps allocate
large heaps or spawn many JVMs which fill all your resident memory. Memory
compression is surprisingly effective at handling these bad actors.
Without sufficient RAM and with swap/zswap disabled, you will probably OOM while
generating documentation with metalava.
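As a sketch, zswap can be enabled at runtime through sysfs. The pool percentage below is an arbitrary example, and this does not persist across reboots (for that, use the zswap.* kernel command-line parameters):

```shell
# Enable zswap and let it use up to 20% of RAM for compressed pages
echo 1  | sudo tee /sys/module/zswap/parameters/enabled
echo 20 | sudo tee /sys/module/zswap/parameters/max_pool_percent
```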
Caching
For multiple builds, consider enabling ccache with or without compression
to reduce build time for subsequent builds.
export USE_CCACHE=1
export CCACHE_COMPRESS=1
ccache -M 20G
Build
Acquiring sources and binaries
Run the following repo commands to download the sources from a hodgepodge of
repositories scattered across the internet.
repo init -u https://github.com/LineageOS/android.git -b lineage-17.1
repo sync -c
Consider using fewer simultaneous jobs if you
have a poor internet connection.
Make sure you keep the -c flag, which tells repo to only download the current
branch. Otherwise, you will have a bad time.
Proprietary blobs
You will need proprietary blobs from your device vendor. These can be
extracted from a working device, ripped from an image, or obtained via unofficial means.
If you end up getting blobs from a repository, consider adding them to
your local manifests. This will make synchronization of your blobs work with repo.
For example, my .repo/local_manifests/roomservice.xml looks like:
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <project path="device/essential/mata" remote="github" name="LineageOS/android_device_essential_mata" />
  <project path="kernel/essential/msm8998" remote="github" name="LineageOS/android_kernel_essential_msm8998" />
  <project path="vendor/essential" remote="github" name="TheMuppets/proprietary_vendor_essential" />
</manifest>
Device specific sources
To get vendor and device trees for your specific device, you
will need to have breakfast. This will fail if your proprietary blobs are
missing.
source build/envsetup.sh
breakfast <DEVICE_CODENAME>
Building
Run the following commands to create an unsigned build:
croot
brunch <DEVICE_CODENAME>
Signed builds can be created via a different process, which I will not go
into here.
Your build artifacts will be in the $OUT directory.
Troubleshooting
Something will probably break while you are trying to build. Here’s
what broke for me, and how I tried to fix it.
- Process is randomly terminated while building
[ 97% 99068/101261] //frameworks/base:system-api-stubs-docs Metalava [common]
It was probably killed by the OOM killer. Watch your available memory
and process OOM scores while building. Consider using the memory-saving
techniques mentioned above. If all else fails, try tweaking various
environment variables to lower memory usage (e.g. Java heap size,
cache sizes, etc.)
ERROR: Failed to run command '['simg2img', '/tmp/targetfiles-oa64tct3/IMAGES/system.img', '/tmp/targetfiles-oa64tct3/IMAGES/unsparse_system.img']' (exit code 255):
Failed to read sparse file
- platform_build/build_image uses /tmp to unsparse image files. On most systems,
/tmp is backed by RAM, with a maximum size of half the available RAM. The
temporary file used by the tool is generated with mktemp, so use the TMPDIR
environment variable to point it somewhere else.
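For example (the directory name here is arbitrary; any disk-backed path works):

```shell
# Point build temp files at disk instead of the RAM-backed /tmp
mkdir -p "$HOME/build-tmp"
export TMPDIR="$HOME/build-tmp"
```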
error: vendor/lineage/build/soong/Android.bp:30:8: module "generated_kernel_includes": cmd: unknown variable '$(PATH_OVERRIDE_SOONG)'
Have you ever found an article in a foreign language (navigate at your own risk) that seems to describe the very issue you are having, with
no responses? Well, I have.