Machine Translations



  1. A machine translation is produced.

  2. The webpage is presented to the community with a note that anyone can contribute and earn $BRIGHT by suggesting better wording.

  3. The suggested wording is reviewed independently via a reverse machine translation and confirmed by volunteers.


The number of words that require moderation is estimated by comparing the original source text with the translated text using a text-comparison method. A maximum correction threshold is established.

Parts of the machine translation that the text comparison flags as non-matching are highlighted as 'to be moderated'.

The estimated cost of a full professional translation, minus the cost of the machine translation, is then divided by the number of words that require moderation, yielding the minimum corrector's revenue per word.
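The revenue rule above can be sketched as a small function (a sketch in Python; the function name and the example figures are mine, taken from the arithmetic worked out later in this post):

```python
def min_revenue_per_word(professional_cost, machine_cost, words_to_moderate):
    """Minimum corrector revenue per word: the savings from replacing a
    full professional translation with a machine translation, divided by
    the number of words flagged for moderation."""
    return (professional_cost - machine_cost) / words_to_moderate

# Example: 800 $BRIGHT professional quote, 150 $BRIGHT assumed machine cost,
# ~1071 flagged words (4825 words at 22.2% inaccuracy).
print(round(min_revenue_per_word(800, 150, 1071), 2))  # ≈ 0.61
```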


Let's take the first page of the Gitbook wiki:

„BrightID lets apps make sure their users don’t have multiple accounts so that everyone has a fair share and a better experience.

Each person has an account on BrightID and makes connections with people they know. A graph is formed with the accounts and their connections and fake accounts are detected by analyzing the graph.

BrightID is a stepping stone to create a fair digital world for all citizens of the world.”

73 words, 416 characters.

And now the original text translated to Spanish using Google Translate, and translated back from Spanish to English, again using Google Translate:

„BrightID allows applications to ensure that their users do not have multiple accounts so that everyone has a fair share and a better experience.

Each person has a BrightID account and connects with people they know. A graph is formed with the accounts and their connections and the fake accounts are detected by analyzing the graph.

BrightID is a stepping stone to creating a just digital world for all citizens of the world.”

73 words, 425 characters.

Both texts have been compared using^

By characters, the two texts are 77.8% identical and 22.2% different.

There are 368 common characters and 105 differing characters.
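The post doesn't name the comparison tool (the reference above is truncated), but a character-level similarity ratio like the one quoted can be obtained, for example, with Python's `difflib.SequenceMatcher`:

```python
from difflib import SequenceMatcher

# First sentences of the two versions quoted above.
original = ("BrightID lets apps make sure their users don't have multiple "
            "accounts so that everyone has a fair share and a better experience.")
back_translated = ("BrightID allows applications to ensure that their users do not "
                   "have multiple accounts so that everyone has a fair share and a "
                   "better experience.")

# ratio() returns a similarity in [0, 1] based on matching character blocks.
ratio = SequenceMatcher(None, original, back_translated).ratio()
print(f"{ratio:.1%} in common, {1 - ratio:.1%} different")
```

Note that the 77.8% figure in the post was produced with a different tool on the full texts, so the exact numbers here will differ.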

From the „Gitbook Spanish Translation” forum thread, we get the following data regarding translation costs:

Translation cost information:

Number of words: 4825

Number of images: 67

2 videos: one of 1 min and another of 1:32

Total cost: 800 Bright (~$400)

Now, let's do some maths.

800 / 4825 yields approximately one sixth of a $BRIGHT, or around 0.166 per word.

For the sake of simplification, let's assume the measured machine translation inaccuracy is evenly distributed across the whole text (which isn't the case, but more accurate data can be provided fairly easily at the cost of some work).

Now, given 4825 words and 22.2% translation inaccuracy, we end up with a maximum of roughly 1071 words that require moderation.
Assuming the cost of the machine translation and accuracy-data provision to be 150 $BRIGHT, we get:

650 / 1071 ≈ 0.61 $BRIGHT of revenue per word, distributed evenly among the participating community, that is, the text moderators/correctors. That's about 3.7 times more profit per word earned by a contributor, at the same expense to the DAO.
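The arithmetic above, end to end (a sketch; the 150 $BRIGHT machine-translation cost is the assumption stated in the post, not a quoted price):

```python
total_words = 4825            # words in the Gitbook wiki
professional_cost = 800       # $BRIGHT quoted for a full professional translation
machine_cost = 150            # $BRIGHT assumed for machine translation + accuracy data
inaccuracy = 0.222            # character-level difference measured above

per_word_professional = professional_cost / total_words           # ~0.166 $BRIGHT/word
words_to_moderate = round(total_words * inaccuracy)               # ~1071 words
per_word_corrector = (professional_cost - machine_cost) / words_to_moderate
gain = per_word_corrector / per_word_professional                 # ~3.7x per word

print(words_to_moderate, round(per_word_corrector, 2), round(gain, 1))
```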

But now, let's look at the reverse machine translation again. The majority of the differences are just different ways of describing the same thing. We've got 'allows applications to ensure' instead of 'lets apps make sure', as in the original; there are some differences in the tenses used; but overall, I'd actually conclude that the reverse machine translation uses more proper wording than the original text.

That means the message to be delivered has been made apparent and sound, and therefore does not require any additional moderation.

Now, it is up to a human to determine whether a machine translation is correct or not. That's not a difficult task: all you have to do is actually read the reverse machine translation and check it side by side for any major flaws, which would then be highlighted by the method described above. But it means one thing: the profit earned by a native speaker is an even greater ratio, and it even makes me consider manually scouring the internet for the proper wording, just to earn some income. :slight_smile:

So, let's summarize the pros of what I've presented:

  • You can easily deploy a worldwide marketing campaign in all languages within a relatively short time.
  • You can evenly distribute the funds across the community for the contributions they make.
  • Greater probability of the application going viral in multiple countries at once.
  • It’s a great PR move to have an organization reward their customers for what would have to be done either way.
  • The community learns and earns the value of what they can do.

Now, what are the cons of this translation approach?

I can see two: the final text will likely have to be edited multiple times to reach a final product; and, depending on whether you agree with this method or not, it would require collaboration between editors, contributors, and the involved community, which in turn would require some dedicated information channels to function properly.

I'd like to hear your thoughts, and if you approve of this technique, I'll be ready to deliver.

Thank you in advance, Bukaj.

Writing this up for buddy8914 @ Fiverr.

Cost: €176 (Around 540 $BRIGHT at the current exchange rate - note by Bukaj)

I will proofread and review the Japanese translation.

If you accept this, the order will begin and I will be paid through Fiverr.

Your offer includes

7 Days Delivery

Additional notes (by Bukaj):

I will have to exchange the $BRIGHT before the offer gets accepted.

Payment through Fiverr or Paypal.

Files must be in Excel or Word format (I'll do this ~~Bukaj)

Page 1 - Japanese ← here’s the machine translation that’ll get reviewed.

  • Accept the offer.
  • Don’t accept the offer.

0 voters