Privacy deserves plain English
Privacy policies are necessary, but they are not always easy to read. In practice, most travelers want simpler answers:
- What data do you collect?
- Why do you need it?
- Who do you share it with?
- Do you sell it?
- What happens when the search includes something sensitive?
Those are fair questions. They matter even more in travel, where a search can include passenger details, booking contacts, and sometimes age-related information that is more sensitive than a basic route query.
What we collect during search and booking
At a high level, Travnexa collects the information needed to operate the product: account details, search requests, booking details, support messages, and the operational records required to keep those flows working.
In the search flow, that can include details such as:
- origin and destination
- travel dates
- passenger mix
- cabin preference
- timing constraints
- passenger ages when the search needs them
That last point matters. We encourage travelers to search with ages when ages help produce a better result, especially for children and infants. That makes the search more accurate, but it also means we have to treat that data carefully.
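To make that concrete, the search input could be modeled along these lines. This is a minimal sketch, not the real schema — every name here is hypothetical — but it shows the key design choice: ages stay optional, and they can be reduced to coarse passenger counts before anything is passed downstream.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class FlightSearch:
    """One search request (illustrative field names). Ages are optional:
    they improve accuracy for child and infant fares, but are only
    collected when the search actually needs them."""
    origin: str                                  # e.g. IATA code "LHR"
    destination: str                             # e.g. "JFK"
    depart: date
    return_date: Optional[date] = None
    cabin: str = "economy"
    passenger_ages: Optional[list[int]] = None   # None = ages not provided

    def passenger_counts(self) -> dict:
        """Derive adult/child/infant counts from the ages, so downstream
        requests can carry the counts instead of the raw ages."""
        if self.passenger_ages is None:
            return {"adults": 1, "children": 0, "infants": 0}
        counts = {"adults": 0, "children": 0, "infants": 0}
        for age in self.passenger_ages:
            if age < 2:
                counts["infants"] += 1
            elif age < 12:
                counts["children"] += 1
            else:
                counts["adults"] += 1
        return counts
```

The useful property is that the raw ages exist in exactly one field, which makes it clear what must be protected and what can be safely derived.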
We do not sell personal data
We can be direct about this: Travnexa does not sell your personal data.
We may share the minimum information needed with providers that help us deliver the service, such as airline connectivity partners, airlines, payment processors, email providers, and hosting infrastructure. That is operational sharing, not data brokerage.
The difference matters. A travel platform cannot complete bookings or manage post-booking workflows without using service providers. But that is very different from treating personal data as something to be packaged and sold.
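Operational sharing can be kept minimal by construction. The sketch below — provider names and fields are illustrative, not the real configuration — shows the pattern: each provider gets an explicit allow-list, and everything else is stripped before the call is made.

```python
# Each provider receives only the fields on its allow-list.
# Provider names and fields here are illustrative examples.
PROVIDER_FIELDS = {
    "payment_processor": {"booking_id", "amount", "currency"},
    "email_provider":    {"booking_id", "contact_email"},
    "airline_partner":   {"booking_id", "route", "passenger_counts"},
}

def payload_for(provider: str, booking: dict) -> dict:
    """Return only the fields this provider is allowed to receive."""
    allowed = PROVIDER_FIELDS[provider]
    return {k: v for k, v in booking.items() if k in allowed}
```

With this shape, adding a new provider forces an explicit decision about which fields it needs, rather than defaulting to sending the whole record.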
Sensitive search details should be handled carefully
Age data is a good example. Age can be necessary for a valid search or booking flow, but necessity does not remove sensitivity.
Our current codebase already reflects that distinction in one important place: when saved searches include passenger ages, those ages are stored encrypted at rest rather than left in plain text.
That is the right instinct. Not every useful piece of search input should become easily readable stored data.
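The encrypt-on-write pattern looks roughly like this. To keep the sketch self-contained, the cipher below is a toy SHA-256 keystream and is a stand-in only: a real system would use a vetted authenticated cipher such as AES-GCM with managed keys, and all names here are illustrative rather than taken from the actual codebase.

```python
import hashlib
import json
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream for illustration; a real system would use AES-GCM."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_ages(key: bytes, ages: list[int]) -> bytes:
    """Serialize the ages and encrypt them before they are stored."""
    plaintext = json.dumps(ages).encode()
    nonce = secrets.token_bytes(16)           # fresh nonce per record
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))

def decrypt_ages(key: bytes, blob: bytes) -> list[int]:
    """Decrypt a stored blob back into the list of ages."""
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return json.loads(bytes(c ^ s for c, s in zip(ciphertext, stream)))
```

The point is the shape, not the cipher: ages are serialized and encrypted before they reach storage, and only code holding the key can read them back.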
What protection looks like today
Today, the privacy baseline includes a few practical protections:
- HTTPS and TLS for data in transit
- restricted access to operational data
- account and session controls such as verification and short-lived access tokens
- encryption for sensitive retained search details such as passenger ages in saved searches
That does not mean every privacy question is settled for good. It means the product should keep moving in the direction of minimizing exposure, narrowing retention, and being honest about what is stored and why.
Why age-based search creates a real design challenge
There is a tension here worth acknowledging openly.
On one hand, age-based search can improve correctness. It helps the system represent adults, children, and infants more accurately and can reduce ambiguity in the downstream request.
On the other hand, as soon as you ask for age, you are handling data that deserves more care than a basic route and date search.
That is why privacy in search is not only about legal language. It is also about system design. If a product asks for more precise input, it should raise the standard for how that input is protected.
Infrastructure location matters too
Infrastructure is part of the privacy story, not separate from it.
The current setup includes a server hosted in London, United Kingdom. That does not answer every privacy question on its own, but location, hosting, and data flow still matter because they shape who operates the environment and where processing happens.
Privacy is not only about whether data is encrypted. It is also about how far it travels, which systems touch it, and how much of that movement is actually necessary.
Why local extraction is worth exploring
One direction worth serious consideration is running intent extraction locally rather than depending on remote model processing for that step.
If flight-intent extraction were run locally with a local model, the privacy benefits could be meaningful:
- less raw search text leaving the local environment
- more control over how sensitive input is processed
- simpler data-boundary reasoning
- a clearer trust story for travelers who care where their search text goes
That is best described as a future direction, not a claim about the current production setup. But it is a sensible direction because privacy improves when fewer systems need access to the raw search in the first place.
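To show what "local" means here, the sketch below parses a flight intent out of free text with nothing but local rules — no network call at all. A local model could replace the rules; the point is the data boundary. The patterns and the output shape are illustrative, not the real extraction pipeline.

```python
import re

# Rule-based local intent extraction: the raw search text never leaves
# this process. Patterns are illustrative examples only.
ROUTE = re.compile(r"\bfrom\s+([A-Za-z ]+?)\s+to\s+([A-Za-z ]+?)(?:\s+on\b|$)", re.I)
DATE  = re.compile(r"\bon\s+(\d{4}-\d{2}-\d{2})\b", re.I)
AGES  = re.compile(r"\bages?\s+([\d,\s]+)", re.I)

def extract_intent(text: str) -> dict:
    """Parse origin, destination, date, and optional ages entirely locally."""
    intent: dict = {}
    if m := ROUTE.search(text):
        intent["origin"] = m.group(1).strip()
        intent["destination"] = m.group(2).strip()
    if m := DATE.search(text):
        intent["date"] = m.group(1)
    if m := AGES.search(text):
        intent["ages"] = [int(a) for a in re.findall(r"\d+", m.group(1))]
    return intent
```

Whatever replaces the rules, the privacy property to preserve is the same: only the structured intent, not the raw text, needs to cross any boundary.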
The goal is not just compliance
The real goal is not to write the longest policy or the most reassuring sentence. The goal is to build a product where privacy decisions make sense at every layer:
- what we ask for
- what we store
- what we encrypt
- what we share
- what we avoid collecting unnecessarily
That kind of privacy posture is more useful than vague promises because it turns trust into something concrete.
The practical takeaway
Travelers should not have to choose between a more accurate search and a more private product.
If a platform asks for sensitive details like passenger ages, it should be able to explain why, protect that information properly, and keep improving the system so less sensitive text leaves the local environment over time.
That is the standard worth aiming for: clear collection, minimal sharing, no sale of personal data, stronger protection for sensitive retained details, and a product roadmap that keeps privacy close to the architecture rather than treating it as an afterthought.