We are in a technological transition: moving from a first-iteration start-up setup to one that takes advantage of cutting-edge technologies. In the future, our setup will be widely distributed in terms of both software architecture and hosting architecture.
Below we have listed some of our overall approaches to the main areas of technological development in which we strive to become world champions. Gaining knowledge in these areas depends entirely on having the right people on board in our technical team. We always keep our eyes and ears open to new knowledge areas, maintain close relationships with external technical institutes and faculties, and are open to contributing to new, interesting and relevant projects.
Advertising is becoming more personalized, and customers are learning to expect an entirely seamless experience. Machine learning helps us accurately target display advertisements and personalize messaging. With machine learning, advertisers can more easily present the right ads to the right customers at the right time, which also improves the customer experience.
Our use of machine learning is advancing rapidly. Our machine learning platform performs real-time analysis of our customers, and we use predictive analytics to serve them in the most suitable way and make them even more loyal to us as a service provider. We track every possible interaction a user makes and continuously update our model, so that our predictions of customers' needs and future behaviour keep improving.
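As a minimal sketch of what "continuously updating our model" can mean in practice, the snippet below trains a tiny online logistic regression one interaction at a time. The feature names, labels, and learning rate are illustrative assumptions, not our production model.

```python
import math

class OnlineLogisticModel:
    """Minimal online logistic regression, updated one interaction at a time.

    Illustrative sketch only: the feature layout and learning rate are
    assumptions, not a description of the production platform.
    """

    def __init__(self, n_features, lr=0.1):
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.lr = lr

    def predict_proba(self, x):
        # Logistic (sigmoid) of the linear score.
        z = self.bias + sum(w * xi for w, xi in zip(self.weights, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, label):
        # One SGD step on the log-loss for a single (features, label) pair.
        error = self.predict_proba(x) - label
        for i, xi in enumerate(x):
            self.weights[i] -= self.lr * error * xi
        self.bias -= self.lr * error

# Hypothetical interaction stream: [clicked_offer, sessions_per_day] -> loyal?
model = OnlineLogisticModel(n_features=2)
interactions = [([1.0, 0.2], 1), ([0.0, 3.0], 0), ([1.0, 0.1], 1), ([0.1, 2.5], 0)]
for features, label in interactions * 50:  # each tracked event refines the model
    model.update(features, label)
```

The point of the online formulation is that no batch retraining is needed: every tracked event is folded into the model as it arrives.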
We use machine learning heavily when buying traffic to our portals, maximising revenue in real time. In practice we mostly work with supervised learning, and we are testing Deep Learning methods on our datasets.
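One common way to turn a supervised model's output into a real-time traffic-buying decision is expected-value bidding: bid up to the predicted revenue of the click, minus a margin. The function below is a hedged sketch of that idea; the parameter names and the margin are assumptions, not our actual bidding logic.

```python
def compute_bid(p_conversion, value_per_conversion, margin=0.2):
    """Expected-value bid for one ad impression.

    Illustrative sketch: bid the expected revenue (predicted conversion
    probability times value of a conversion), discounted by a profit
    margin. All names and the margin default are assumptions.
    """
    expected_revenue = p_conversion * value_per_conversion
    return max(0.0, expected_revenue * (1.0 - margin))

# E.g. a 2% predicted conversion chance on a 50.00 conversion value,
# keeping a 20% margin, caps the bid at 0.80.
bid = compute_bid(p_conversion=0.02, value_per_conversion=50.0)
```

The supervised model supplies `p_conversion`; everything else is deterministic arithmetic, which is what makes the decision fast enough for real-time bidding.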
Some computations are too expensive to perform on a single node, either because the computation itself is expensive or because the dataset is massive. To make such computations feasible, we distribute them across a cluster of nodes using a parallel stream processing engine that scales to thousands of queries and hundreds of nodes.
Our parallel stream processing engine is fault tolerant: if a node fails, the computation recovers automatically, without user intervention, and is guaranteed to return the correct result. It also ensures low-latency processing, meaning the time from when a data item is received until it has been fully processed stays well below one second. For a large set of queries, low latency is imperative, e.g. in intrusion detection systems or runtime user analysis.
Our services are hosted on Amazon Web Services (AWS), whose cloud ensures good scalability, stability, flexibility and performance. We aim to build fully managed services, leveraging powerful offerings such as Lambda, Data Pipeline and Cognito, while remaining cloud agnostic.
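A fully managed service built on Lambda boils down to plain functions that AWS invokes for us, with no servers to operate. The handler below shows the standard Lambda handler shape; the event field and response body are hypothetical, and keeping business logic in plain functions like this is also what makes it portable to other clouds.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler sketch.

    AWS invokes this function directly with an event dict and a context
    object. The `user_id` field and the greeting are illustrative
    assumptions, not a real API of ours.
    """
    user_id = event.get("user_id", "anonymous")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {user_id}"}),
    }

# Locally the handler is just a function call, which keeps it testable
# and avoids coupling the business logic to any one cloud provider.
response = handler({"user_id": "42"}, None)
```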
We are currently evaluating the benefits of running our content-management systems in a managed fashion, as this would improve scalability and maintainability while lowering the cost of our current setup.