Getting Started Disassembling Nikon Firmware

Download the firmware you want to improve. Steve Goossens has a really nice patching guide that has links to the Nikon download archives, if you are looking for older firmware.

Open the firmware in the Nikon-Patch Local UI tool (it is a C# project, so you will need a Windows PC and Visual Studio 2019 Community to run it) and decode/extract the firmware.

The main UI firmware is the B firmware; we will open that in Ghidra.

So demonstrating with the D3300 1.02 firmware, the decode step gives us the files a860_010101.bin and b860_102.bin.

So create a new Ghidra project, then import the B firmware

Open the B firmware

Then select the ARM v7 little endian

select the ARM v7 little endian

Finding the firmware offset

For now ignore the Options; we will come back and load the file at the correct offset later. Also, for the next couple of steps don't save anything, as we will be deleting and re-importing files while we work out where the files should be loaded.

Once we open the file, you will be prompted to "Auto disassemble this file". Say No to that, and press the D key on the first line.

this will look like:

    //
    // ram 
    // ram:00000000-ram:016bff89
    //
         assume spsr = 0x0  (Default)
                         Reset                                           XREF[1]:     Entry Point(*)  
    00000000 18 f0 9f e5     ldr        pc=>LAB_90352fe0,[DAT_00000020]                  = 90352FE0h
Disassemble the first line

This is noting that the B firmware reset jump address is loaded from offset 0x20 in the firmware, which holds the address 0x90352FE0. Nikon B firmwares follow a pattern of being loaded at 0xNN020000, thus this model camera is loaded at 0x90020000.
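The derivation can be sketched as a tiny helper (hypothetical naming; the 0xNN020000 pattern is the one described above):

```python
def load_base_from_reset_target(reset_target: int) -> int:
    # Nikon B firmwares load at 0xNN020000, so keep the top byte of the
    # reset vector target and apply the fixed 0x020000 offset.
    return (reset_target & 0xFF000000) | 0x00020000

# D3300 1.02: the vector at offset 0x20 holds 0x90352FE0
print(hex(load_base_from_reset_target(0x90352FE0)))  # 0x90020000
```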

Now that we have the correct offset, close the CodeBrowser and don't save.

Close and don't save

Delete the firmware from the project, and confirm Yes to permanently deleting it.

Delete firmware from project

and reimport, but this time set the memory offset

Set ROM offset

Now we open the CodeBrowser again, and again say No to the Analyze option. This time we want to learn the address the ITRON kernel gets loaded to, and if we have that memory mapped before the auto-magic process begins, a lot of errors are avoided.

Analyze prompt, it's still No

So here we are going to decode the first jump and follow that twice, which puts us in the reset function (at 0x90352fe8; press F to mark that as a function). Then, scrolling down, there are some sections where a repeating pattern occurs: that is a series of MemCopy operations.

The MemCopy function's usage looks like this

Clicking on the address 9035329f we see some code that looks like this. It's loading 48 bytes into all the spare registers via the LDMIA instruction and then writing them to the destination. This is the memcopy we are looking for.

and the MemCopy function looks like this

So now we can right-click on the function in the Disassembler window and select Edit Function Signature

So let's mark up that MemCopy

and then, jumping back to the reset function (Alt+Left Arrow steps back), the code will look like:

FUN_903533a0(PTR_DAT_903531cc,PTR_DAT_903531d0,0xae000000);
FUN_90020098();
memcopy((undefined4 *)PTR_LAB_903531d4,DAT_903531d8,DAT_903531dc + 3U & 0xfffffffc);
memcopy((undefined4 *)PTR_LAB_903531e0,DAT_903531e4,DAT_903531e8 + 3U & 0xfffffffc);
memcopy((undefined4 *)PTR_LAB_903531ec,DAT_903531f0,DAT_903531f4 + 3U & 0xfffffffc);
memcopy((undefined4 *)PTR_DAT_903531f8,DAT_903531fc,DAT_90353200 + 3U & 0xfffffffc);
memcopy((undefined4 *)PTR_DAT_90353204,DAT_90353208,DAT_9035320c + 3U & 0xfffffffc);
memcopy((undefined4 *)PTR_DAT_90353210,DAT_90353214,DAT_90353218 + 3U & 0xfffffffc);
FUN_900200a4();
dst = (undefined4 *)((DAT_9035321c & 0xfff00000) + 0x100000);

which looks strange, because the destination pointers are not followed. But we can alter the memory mapping to drop Write from the ROM area

Remove write from ROM to solve

and the code then becomes:

memcopy((undefined4 *)&LAB_9158a358,(undefined4 *)0x10000000,0x3b5b4);
memcopy((undefined4 *)&LAB_915c590c,(undefined4 *)0x1003b5b4,0x179c);
memcopy((undefined4 *)&LAB_915c590c,(undefined4 *)0x1003b5b4,0);
memcopy((undefined4 *)&DAT_916a73a0,(undefined4 *)0x1011d420,0x28b3c);
memcopy((undefined4 *)&DAT_916a73a0,(undefined4 *)0x1011d420,0);
memcopy((undefined4 *)&DAT_916a739c,(undefined4 *)0x1011d03c,4);

Now we know the offsets we need to add as memory maps, on the very screen we used to fix the ROM value: the "Memory Map" screen. Really only those three blocks need to be mapped; these are RAM areas, and there is some general RAM that needs to be identified, but for now this is the start we need.
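The three blocks can be read straight off the resolved memcopy arguments above; a sketch using the D3300 1.02 values (the code/data labels are my assumption):

```python
# (dest in RAM, length in bytes, source in ROM), taken from the resolved calls
kernel_copies = [
    (0x10000000, 0x3B5B4, 0x9158A358),  # assumed: kernel code
    (0x1003B5B4, 0x179C,  0x915C590C),  # assumed: kernel data
    (0x1011D420, 0x28B3C, 0x916A73A0),  # assumed: third data segment
]
for dest, length, src in kernel_copies:
    print(f"map block {dest:#010x}..{dest + length - 1:#010x} (from ROM {src:#010x})")
```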

Which is exactly what I have been noting in the Firmware Notes in the code repo, when I remember.

So now we can delete all this again, load the B firmware at the correct location, add the memory maps, and then press Auto Analyze and walk away for a cup of tea.

Adding the Kernel memory maps

So, using the values we found (or the Firmware Notes), we want to add the three blocks of memory for the Code, Data, and a third (unidentified) data segment:

Adding RAM1 memory map, make sure you tick execute (x) and select Byte Mapped

repeat for RAM2 and RAM3

and you will end up with

Kernel memory mapped

Now it’s time to Auto Analyze

Now auto analyze

Going forward from here

There are a number of landmarks to look out for. There is a chunk of kernel debugging code that you can find the strings for, and from there find the stubbed print functions, and thus all their callers; those give a lot of detail. There is a small amount of RTTI information. There are some documented protocols, like the TIFF/JPG header writing code. The PTP handlers are driven by tables, so are quite easy to find, and the EyeFi code can be found quickly.

And then there is the method of loading other camera models' firmware side-by-side, and looking up the addresses of existing patches to find the related code.

I intend to write more on most of these things, but it has also taken me 4+ years to write this up, so it's not going to happen overnight…

It helps to plug it in

So at some point I came across this picture; I feel it was from some quality corporate compliance training. Anyway, it had these great chaps, collaborating and synergising over stuff on those computer screens..

Let us collaborate and synergise, yes let's

Only problem is, those devices are very much "not plugged in". Now I know, cables are ugly, and staged shots look best when they are not "real", I get it. But it also irks me. Maybe this is just me saying "I still have not grown up enough to not talk about meaningless things that irk me".. yes, that might be the only value. Still not there yet.

No cables, I checked

Google image search shows it used in a number of places, so it seems like it's from some stock photo library.

Goodbye SageTV thanks for all good times

The time has come that the trusty SageTV has been turned off.

This year has not been the best for the SageTV and, to be honest, all the problems were between my server (Win10) and the HDHomeRun. Back in September 2020, after a Windows Update, the SageTV process failed to connect to the HDHomeRun. After much fluffing around I installed OpenDCT and got that working, and SageTV could see that.

Then a couple of weeks back both Sage/OpenDCT and EPG Collector stopped being able to see my HDHomeRun. Again more fluffing around, installing updates; I got it so the HDHomeRun tools could see the channels and tune to stuff, and a little while later SageTV could see live TV too, but EPG Collector never had any luck.

Then Michaela called it and said the streaming apps were good enough, and it could be shut down. On one level I was super glad to get to stop thinking about what to try next, or that I really should get back on to that task. On another level, super sad to give up on the system. It had been a really neat tool for the last 10 years.

Uninstalling SageTV

So other neat things: I have freed up a chunk of disk space, deleted multiple copies of installations that were kept to debug changes, and deleted most of the code changes I had made to SageTV and EPG Collector, and tweaks to OpenDCT (I think those were submitted). Anyway, letting code changes go feels really relaxing; permission to just forget those TODOs.

Uninstalling 'XMLTV Importer for SageTV'

So now I have no scheduled tasks, and can allow the PC to turn off at night, so that all feels better going forward.

I really value the community efforts of: SageTV, EPG Collector, Comskip, OpenDCT, XMLTV Importer for SageTV

Setting up new server notes

So my friends suggested AWS S3 as a good way to host a static blog site, and I had been meaning to do this transition for a few years, but just never put it high enough on the to-do list.

I decided to use S3 & CloudFront with the REST API endpoint as the origin, with access restricted by an OAI. I also chose to make CloudFront accept only HTTPS; I was moving to the future..

I then hunted through backups looking for all the files, as I was super lazy and only had sub-folders of the site saved on my current PC, and then uploaded all that to S3. I turned on CloudFront via the great temp domain https://dyk3v11u6y5ii.cloudfront.net/nikon-patch/nikon-patch.html which still works!

Once the domain transferred, there were more days while things propagated before Route 53 would let me add a cert, and after adding that it takes ages (~an hour) to propagate before the auto-fill in CloudFront lets you select the alias domain. All fun stuff, trying to work out why it was not working..

Then I set up a Lambda@Edge script to auto-convert /blog/postname/ requests into /blog/postname/index.html, which is how the files are actually laid out in S3. Not said so far, but AWS has really good documentation; the major flaw is you find the wrong docs when googling, but once you are looking at the current/correct Lambda@Edge docs, the UI/code makes sense. The docs I followed, and the script I more or less used.
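The rewrite itself is only a couple of lines of logic; a minimal sketch of such an origin-request handler (illustrative Python version; the script I actually used was AWS's example):

```python
def handler(event, context):
    # CloudFront origin-request event: rewrite directory-style URIs to the
    # index.html object that actually exists in S3.
    request = event["Records"][0]["cf"]["request"]
    uri = request["uri"]
    if uri.endswith("/"):
        request["uri"] = uri + "index.html"
    elif "." not in uri.rsplit("/", 1)[-1]:
        # No file extension on the last segment: treat it as a directory.
        request["uri"] = uri + "/index.html"
    return request
```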

I installed a 403 error handler that loads my 404-page.html page, which has a Google Analytics event so I can see which pages are getting 404s.

The 403: Forbidden is the response S3 gives for files not found
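In CloudFront terms this is a custom error response; sketched below as the structure the CloudFront API uses (the page path is the one above; the caching TTL is an assumption, as the post doesn't state one):

```python
# Map S3's 403-for-missing-objects onto the custom 404 page.
custom_error_response = {
    "ErrorCode": 403,                      # what S3 returns for a missing key via OAI
    "ResponseCode": "404",                 # what the visitor should see
    "ResponsePagePath": "/404-page.html",  # the page with the GA event
    "ErrorCachingMinTTL": 300,             # assumption: not stated in the post
}
print(custom_error_response["ResponsePagePath"])
```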

I used to have a rather large collection of URL rewrites on the Windows server, so once this was done my base page hits were more aligned with how they used to be:

Black arrow site down, Red arrow domain transferred, Green arrow 404 page added

Once I had the 404 page, I found my own blog posts were the major source of bad links; joys of a Windows webhost, all files are equal.

404 Pages Highscore, the dip on Jul 5th is when I altered most of my internal pages to be correct

I made a number of alterations to Hexo modules so the blog behaves as I want, and did a large amount of mass editing of old posts, but those can have their own posts in the future.

But so far the monthly bills, ignoring the domain transfer, have all been sub $1 NZD/month (and $0.50 of that is the fixed Route 53 charge), so it's a rather pleasing saving.

Hosting company appears hijacked

So on the 30th of May 2021 I woke to some helpful emails, and an issue posted on GitHub saying "your domain has expired". Which was super strange, given I had a couple of years of domain registration to go..

If you went to any page on simeonpilgrim.com sure enough it appeared “like the domain was expired”

domain expired, click here to renew it.

I logged into the hosting company "kiwigeeks.net" customer service area, and I could see I had a couple of years of domain left, but when I tried to log into the HELM interface on the servers I could not log in. Back to the customer service area, I "reset" my HELM passwords, but those didn't work.

It was starting to look like someone had hijacked the Windows Server 2008 boxes (strange, that).

The DNS records in the customer service area still showed correct values, but using Google's Dig tool showed some other box than the correct hosts, ns1.techservers.net & ns2.techservers.net. If you are a previous customer of KiwiHosting using their Windows hosting ($10 NZD/mth), then your stuff might also be dead.

Anyways I posted a request for help:

Things have gone wrong, please help!

But after a couple of hours of not being able to regain control of the host, I decided it was time to leave KiwiGeeks. I should have left KiwiGeeks years ago, like when they stuffed up my domain renewal.

So I pinged my friends on Discord, and the suggested solution was AWS S3 hosting (which worked well as I had already moved to static content via Hexo)

So I spent some hours reading how to use S3 and CloudFront to host a "static blog": Notes on setting up new server.

I kicked off a domain transfer from KiwiGeeks to AWS Route 53 (it was a toss-up between AWS DNS services and Google; the Discord team were split on this, but in the end the costs are about the same, and all-in-one-shop felt like it might work out better for me). It turned out "lucky" that the domain transfer "code" functionality still worked, and then I got to wait. I was "super" excited to receive the automated email from KiwiGeeks saying they would sit on the transfer for 5 days, for safety reasons. I mean, that is sensible, except this was an exception.. or maybe not; I'm not sure I would have wanted the hijackers to have taken my account "super fast". Anyway, at the time it was super slow, and now it's just done.

Anyways, KiwiGeeks.net support never got back to me, and I updated my request to "close my account", and they still ignored me. So a large part of this post is to demonstrate that I tried with them, and that I will not "pay" any outstanding debt if they are stupid enough to try to action it.

Why you ghosting me, it's over
The transferred domain, sure let's pretend it's still under our control

Hello World

Welcome to Hexo! This is your very first post.

Well, first post it is not, but sure, first Hexo post. My old WordPress blog was getting rather old, and I made a couple of mistakes in updating the server, which meant I've not been able to post for over a year.

Not that I had posted anything in 2016, but things have moved on in many dimensions, and at times I'd like to be able to document stuff (for myself) again.

Given my host cannot "upgrade PHP on the server" (a WTF in and of itself) to a version that supports current WordPress, I needed to get off it. So I almost randomly chose Hexo, and have been altering the defaults somewhat, imported the old blog (and comments), and done a lot of editing of the formatting (with heaps more to go).

So now this needs some testing to see how it all goes live in production..

Anyway, there are no comments; email me at simeon.pilgrim@gmail.com and I will mainly add your comment if it makes sense..

Snowflake data warehouse date range based joins 45x speedup

I have been using http://www.snowflake.net for new data processing at work for a few months, and it's just amazing to be able to run large queries over large data-sets, and to be able to increase the cluster size when doing development work to get faster turnarounds without impacting the production cluster. Brilliant.

One of the things I have noticed is slower than I would like is joins based on tableA.time being inside a time range of tableB.start and tableB.end, when the time period involved is months rather than days.

So, for the pattern of mapping a value from TABLE_I onto all rows in the time span (not including the end):

CREATE OR REPLACE TEMPORARY TABLE WORKING_B AS
SELECT tp.u_id, tp.time, i.value
FROM TABLE_P tp
JOIN TABLE_I i ON tp.u_id = i.u_id
AND tp.time >= i.start_time AND tp.time < i.end_time;

For one set of data spanning 3 months, the above takes 45 minutes on a small cluster for TABLE_P 65M / TABLE_I 10M rows.
Whereas for a similar set spanning 4 days and ~45M rows, this takes 30 seconds.

So I added TO_DATE(time) and TO_DATE(start_time) columns to the two tables, and then added AND tp.time_day = i.start_time_day, and the first query went to ~60 seconds. But I was missing a few million rows, as my time ranges span multiple days…

So I tried many things that didn't work (like trying to use a GENERATOR with dynamic input) and settled on a simple solution:

CREATE OR REPLACE TABLE TEN_YEARS_OF_DAYS(date) AS
SELECT DATEADD(day, (rn-1),DateADD(months,4,DATEADD(years,-10,CURRENT_DATE))) FROM (
SELECT row_number() over(order by 1) as rn
FROM TABLE(GENERATOR(rowCount => 365*10)) v);

CREATE OR REPLACE FUNCTION get_dates_for_N_days ( start_date DATE, end_date DATE )
RETURNS TABLE (date DATE)
AS 'SELECT date FROM TEN_YEARS_OF_DAYS WHERE date BETWEEN start_date AND end_date';

So this creates a table with ten years of days (moved 4 months into the future) and a table function that selects rows from it, so I can do a lateral join on that function:

CREATE OR REPLACE TEMPORARY TABLE TABLE_I_B AS
SELECT e.*, t.date as range_day_part
FROM TABLE_I e, LATERAL get_dates_for_N_days(TO_DATE(e.start_time), TO_DATE(e.end_time)) t;

So the above code creates another temp table, with one row per TABLE_I record for every day its range touches. We now have more rows in the second table, but we can do a date-based match to speed up the query.
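The expansion step is the whole trick: each TABLE_I row is repeated once per calendar day its range touches, so the join can first match on equal days. A Python sketch of the same idea (illustrative, not Snowflake code):

```python
from datetime import date, timedelta

def expand_to_days(start: date, end: date):
    # Yield one row per calendar day in [start, end], mirroring the
    # LATERAL get_dates_for_N_days(...) join above.
    d = start
    while d <= end:
        yield d
        d += timedelta(days=1)

days = list(expand_to_days(date(2021, 1, 30), date(2021, 2, 2)))
print(len(days))  # 4 rows: one per day the range touches
```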

CREATE OR REPLACE TEMPORARY TABLE WORKING_B_B AS
SELECT tp.u_id, tp.time, i.value
FROM TABLE_P_B tp
JOIN TABLE_I_B i
ON tp.u_id = i.u_id AND tp.time_day = i.range_day_part
AND tp.time >= i.start_time AND tp.time < i.end_time;

This code runs in 60 seconds and gives the same results as the 45 minute code.

Things to note: putting LATERAL table joins on selects with CTEs presently breaks the SQL parser; in fact even nested selects and LATERAL don't mix, hence the extra tables with _B etc. Also, CTEs make life so much easier, but as you start joining to them a lot, performance slips; I have found that where I do a complex join is a good time to output to a temporary table, and then the performance again is crazy…

Snowflake is just a beast. I have a man crush…

The Wrong Kind of Paranoia

James has a new post, http://prog21.dadgum.com/206.html, in which he wonders if a lot of the idioms like const, and static or sealed classes, are there to allow developers to protect themselves from themselves. I would have commented on his blog, but he doesn't like comments, which is a completely sane stance to have; heck, the only comments I get here are spam.

So I had always thought those idioms were there to allow compiler optimizations, because they carry extra information that was not tracked by older compilers, or to make the code cleaner.

const variables avoid macros ("true evil") and keep the type information.
static keeps the symbol table size down.
sealed allows real optimization, as functions will never be replaced.

And these were needed so the new language could perform "better" at some key benchmark, used to once again prove that poor "real world" problems are faster in assembly or C than in new language X, thus all problems should use C/asm.

Goodbye Google Code, Hello GitHub

Well, with the news that Google is closing down Google Code, I've moved the two projects I am responsible for to GitHub.

so https://code.google.com/p/coab/ is now at https://github.com/simeonpilgrim/coab

and https://code.google.com/p/nikon-firmware-tools/ is now at https://github.com/simeonpilgrim/nikon-firmware-tools

I’ve swapped over to the git tools, which I dislike compared to the hg tools, but I couldn’t get hg-git to play nice with github, so have moved on.

Tiny Planets Coding

The other day on Google+ I read a post from Ana Andres talking about coding Tiny Planets, and how it was running really slowly in Matlab, somewhere in the pixel drawing code.

That got me thinking about how I would code it in C#. Ana had a good blog post showing one way of thinking about how the transformation happens; one of her goals was to make an animated GIF so people could see the wrap/bend happening.

Tiny Planet Code A
Tiny Planet Code A GIF

Now, this was plenty fast enough; I was getting 2-3 fps on my laptop using only one core, so from a speed perspective I moved on.

My first plan was to interpolate between the points (although that would be really slow, and look bad at the outside), but when I showed my co-worker my method, he said I was silly and should just traverse the output space and find the input pixel that maps to each output pixel. So I did that and got:

Tiny Planet Code C

I then had to put the bend code back in, as that was a special aspect needed for the animation and got:

Tiny Planet Code D GIF

I then started playing with sub-sampling to make the output less jaggy (F), then decided to blend in YUV (or YCbCr) colour space, and finally settled on a five-point YUV average, where you sample the centre of the pixel and four points 1/4 of the way towards the pixel's diagonal corners. Giving:

Tiny Planet Code H GIF

For now the GIFs are converted to videos, but if they are too slow I will have to go back to plain links

An improvement on the process would be to allow scaling and translation of the origin of the polar focus; this might be useful in an interactive UI to allow exploring the space.

Here’s my source code for Step H, it was fun to learn to make GIFs in C#.

The full source is part of my Old School Graphics repo on GitHub

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;
using System.Diagnostics;
using System.Windows.Media.Imaging;
using System.Windows;
using System.IO;

namespace TinyPlanet
{
    class Program
    {
        static void Main(string[] args)
        {
            Bitmap src = new Bitmap(@"C:\temp\tinyplanet-1.png");

            GifH(src);
        }

        static void GifH(Bitmap src)
        {
            GifBitmapEncoder gEnc = new GifBitmapEncoder();

            for (int i = 0; i <= 100; i++)
            {
                Bitmap dst = Bend_H(src, i);
                Debug.WriteLine(string.Format("frame {0}", i));

                var frame = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap(
                        dst.GetHbitmap(),
                        IntPtr.Zero,
                        Int32Rect.Empty,
                        BitmapSizeOptions.FromEmptyOptions());
                gEnc.Frames.Add(BitmapFrame.Create(frame));
            }

            gEnc.Save(new FileStream(@"C:\temp\tinyplanet-out_H.gif", FileMode.Create));
        }

        static Bitmap Bend_H(Bitmap src, int bend_i)
        {
            int src_half_width = src.Width / 2;
            int dst_width = (src.Height * 2) + src.Width;
            int dst_height = src.Height * 2;
            int dst_origin_x = dst_width / 2;
            int dst_origin_y = dst_height / 2;

            double bend = bend_i / 100.0; // turn to percentage;


            int bend_x = (int)(src_half_width * (1.0 - bend));
            double bent_pixels = src_half_width - bend_x;
            double final_ang_d = 180.0 * bend;
            double final_ang = final_ang_d * (Math.PI / 180.0);


            Bitmap dst = new Bitmap(dst_width, dst_height);
            Graphics g = Graphics.FromImage(dst);
            g.Clear(Color.White);
            g.DrawImage(dst, 0, 0, dst.Width, dst.Height);

            int bend_start_x = dst_origin_x - bend_x;
            int bend_end_x = dst_origin_x + bend_x;

            for (int x = 0; x < dst_width; x++)
            {
                int fix_x = (x < dst_origin_x ? bend_x : -bend_x);
                int ox = (x - bend_start_x) + (src_half_width - bend_x);

                for (int y = 0; y < dst_height; y++)
                {
                    if (x > bend_start_x &&
                        x < bend_end_x)
                    {
                        // rectilinear
                        if (y < src.Height)
                        {
                            dst.SetPixel(x, y, src.GetPixel(ox, y));
                        }
                    }
                    else
                    {
                        // map from output to input
                        int dx = x - dst_origin_x + fix_x;
                        int dy = y - dst_origin_y;

                        int p0 = ARGBAtPoint(dx - 0.25, dy - 0.25, final_ang, ref src, bent_pixels, fix_x);
                        int p1 = ARGBAtPoint(dx - 0.25, dy + 0.25, final_ang, ref src, bent_pixels, fix_x);
                        int p2 = ARGBAtPoint(dx + 0.25, dy - 0.25, final_ang, ref src, bent_pixels, fix_x);
                        int p3 = ARGBAtPoint(dx + 0.25, dy + 0.25, final_ang, ref src, bent_pixels, fix_x);
                        int p4 = ARGBAtPoint(dx, dy, final_ang, ref src, bent_pixels, fix_x);

                        int yy, cr, cb;
                        int yt = 0, crt = 0, cbt = 0;
                        YCbCrFromRGB(p0, out yy, out cr, out cb);
                        yt += yy; crt += cr; cbt += cb;
                        YCbCrFromRGB(p1, out yy, out cr, out cb);
                        yt += yy; crt += cr; cbt += cb;
                        YCbCrFromRGB(p2, out yy, out cr, out cb);
                        yt += yy; crt += cr; cbt += cb;
                        YCbCrFromRGB(p3, out yy, out cr, out cb);
                        yt += yy; crt += cr; cbt += cb;
                        YCbCrFromRGB(p4, out yy, out cr, out cb);
                        yt += yy; crt += cr; cbt += cb;

                        dst.SetPixel(x, y, Color.FromArgb(ARGBFromYCbCr(yt / 5, cbt / 5, crt / 5)));
                    }
                }
            }

            return dst;
        }

        static void YCbCrFromRGB(int RGB, out int Y, out int Cb, out int Cr)
        {
            int r = (RGB >> 16) & 0xFF;
            int g = (RGB >> 8) & 0xFF;
            int b = RGB & 0xFF;
            Y = (int)(0.299 * r + 0.587 * g + 0.114 * b);
            Cb = (int)(128 - 0.169 * r - 0.331 * g + 0.500 * b);
            Cr = (int)(128 + 0.500 * r - 0.419 * g - 0.081 * b);
        }

        static int ARGBFromYCbCr(int Y, int Cb, int Cr)
        {
            int r = Y + (int)((1.4f * (Cb - 128))+0.5);
            int g = Y + (int)((-0.343f * (Cr - 128) - 0.711f * (Cb - 128))+0.5);
            int b = Y + (int)((1.765f * (Cr - 128)) + 0.5);
            return (int)(0xFF000000 + (r << 16) + (g << 8) + (b));
        }


        static int ARGBAtPoint(double dx,
                double dy,
                double final_ang,
                ref Bitmap src,
                double bent_pixels,
                int fix_x)
        {
            double r = Math.Sqrt((dx * dx) + (dy * dy));
            double q = Math.Atan2(dy, dx);

            double pic_ang = q + (Math.PI / 2.0);
            double mod_ang = ((pic_ang + Math.PI) % (Math.PI * 2.0)) - Math.PI;

            if (Math.Abs(mod_ang) <= final_ang)
            {
                int dev_x = (int)((mod_ang / final_ang) * bent_pixels) - fix_x + (src.Width/2);
                int dev_y = (int)r;

                if (dev_x < src.Width && dev_x >= 0 &&
                    dev_y < src.Height)
                {
                    return src.GetPixel(dev_x, src.Height - dev_y - 1).ToArgb();
                }
            }

            return Color.White.ToArgb();
        }
    }

}