Developer Notes


Bulltrackers was developed with one idea: making access to market data cheap.

Look around at the cost of seeing granular data on market behaviour: you have to shell out hundreds - and more likely thousands - every month for access to world-leading tools.

This is because access to this data is heavily protected. The companies providing it must pay upfront API costs just to display that data.

But Bulltrackers is different. We don't pay for any API. We built our own. This makes our product drastically cheaper, whilst offering market-level data.

For our data we use the eToro broker: with a custom API, we can track the behaviour of over 15 million users every day, at a cost of less than a tenth of a penny per user.

Almost all our running costs are derived from the servers processing this data, which run across 30 countries and more than 56 data centres.


Informal Update logs

Bulltrackers is run by one guy, with a mission.

To provide you, and every market participant, access to data that is affordable. 

This idea was born out of a project started in 2023, which gradually came to reality throughout the first half of 2025.

However, the costs of this project - despite being cheap relative to competitors - are still beyond the budget we have.

Ok, I should stop talking about myself in the plural. 

Yes, I can't afford this. Perhaps a few months, maybe at a big stretch a year, but as the storage of data scales, my costs rise too.

I need your help.

Help me build a tool that will transform the data retail can access.

Change the landscape of the market and finally empower retail with the data to beat the market.

07/05/25

This week we released the biggest update the site has ever seen.

Finally, after two years and a lot of stress - far more than any 20-odd-year-old should experience - the backend is complete. I consider this done.

The final technical processes included creating the scheduling mechanisms to fetch users. Due to cost limitations, I have configured a limit on the number of users processed each day. This limit is roughly 300,000 randomly sampled users from a database of over 12 million public accounts.

It costs me roughly $40 a day, which is considerable for a site that generates no revenue, built by some dude who doesn't exactly have a huge spare chunk of cash or a large enough income to fund it forever. I am drawing down my savings to fund basically every new change; that shows how much I believe in this.

The backend is now in excess of 40GB of program scripts and is deployed across over 30 countries and 56 datacentres, with the ability to easily scale this higher, or lower, as required. A single new line of code adds a set of 8 new machines in a single managed instance group (MIG).

If and when revenue from the project rises - or even begins - I can easily modify the site to increase the scale of computations. For now, however, there is a large enough base sample of users, plus a continual daily additional sample, to suffice.

Mathematically, there is little difference between a random sample of 200,000 users and one of 1 million: sampling error shrinks with the square root of sample size, so five times the users buys you less than half the error.
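To make that concrete, here is a quick back-of-the-envelope check using the standard 95% margin of error for an estimated proportion (worst case p = 0.5, normal approximation); the sample sizes are the ones from the post, the rest is textbook statistics rather than anything Bulltrackers-specific:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

moe_200k = margin_of_error(200_000)    # ~0.0022, i.e. about +/-0.22%
moe_1m = margin_of_error(1_000_000)    # ~0.0010, i.e. about +/-0.10%
```

Quintupling the sample only cuts the margin of error by a factor of sqrt(5) ≈ 2.24, which is why a 200,000-user daily sample already gives estimates close to what a 1-million-user sample would.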

The backend is configured to process approximately 2,500 randomly sampled users every 20 minutes, allowing a daily processing range of between 180,000 and 220,000 users.
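The arithmetic behind that range can be sketched in a few lines; the batch size and interval are the ones stated above, and the fixed-batch result lands exactly on the bottom of the stated range (the upper end presumably comes from batch-size variation, which I have not modelled here):

```python
MINUTES_PER_DAY = 24 * 60

def daily_throughput(batch_size, interval_min):
    """Users processed per day at a fixed batch size and interval."""
    batches_per_day = MINUTES_PER_DAY // interval_min  # 72 for a 20-minute cycle
    return batches_per_day * batch_size

daily_throughput(2500, 20)  # -> 180000, the bottom of the stated range
```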


As we can see from this chart, after the update released at 16:50 there was a drastic drop in undelivered messages originating from the mailing system, which wraps each username in a message and forwards it to our distributed server network.

And now that the scheduler is set to process 2k users every 20 minutes, we see a consistent zig-zag pattern, which shows our system can comfortably handle the backlog within the allocated time and be ready to process the next set of users.
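The zig-zag pattern described above can be sketched as a simple queue simulation. The batch size and cycle length come from the post; the per-minute drain rate is a hypothetical illustration, not a measured figure - the point is only that whenever the drain rate exceeds batch size divided by cycle length, the backlog empties before the next batch lands:

```python
BATCH_SIZE = 2500      # users enqueued per cycle (from the post)
CYCLE_MIN = 20         # minutes between batches (from the post)
DRAIN_PER_MIN = 250    # hypothetical users processed per minute

def backlog_over_cycle(batch_size=BATCH_SIZE, cycle_min=CYCLE_MIN,
                       drain_per_min=DRAIN_PER_MIN):
    """Backlog at each minute of one scheduling cycle: a fresh batch
    lands at minute 0 and workers drain it at a steady rate."""
    backlog = batch_size
    trace = [backlog]
    for _ in range(cycle_min):
        backlog = max(0, backlog - drain_per_min)
        trace.append(backlog)
    return trace

trace = backlog_over_cycle()
# The zig-zag: backlog spikes to 2,500 at minute 0 and hits 0 by
# minute 10, well before the next batch arrives at minute 20.
```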

This is an extremely efficient update that results in minimal cost, zero backlog and a predictable process.

Now that this system is fully functional, the database storing all these computed metrics is populated regularly enough to provide intraday data consistently throughout the day and night.

In the future, I will build a similar model that specifically focuses on fetching users identified as high-frequency traders and day traders. This will allow me to build a more customised SQL table for analysing intraday changes in a smaller group of users. First, however, it will require a large enough sample of such users and their trading strategies, as well as the budget for such an update, which would likely be quite expensive - but very much worth the cost.


Got suggestions?

At Bulltrackers, we are always looking for new data to show. Our database is so large that we are full of new ideas every day, and we work hard to ship them. Our backend is a comprehensive library of information, aggregated and processed constantly by a huge range of servers distributed across the globe.

If you have an idea for something you want to see data on, let us know. We will get back to you on the feasibility within a day, and aim to build it within a week. At no extra cost!