Weaving a wider spidersilk: Optimizing ads placement using web crawl data
Format: text
Published: Animo Repository, 2021
Online Access: https://animorepository.dlsu.edu.ph/faculty_research/11127
Institution: De La Salle University
Summary: This research built a proof of concept for an algorithm that extracts commonalities between webpages through the links recorded in the Common Crawl dataset. The resulting similarity information can be surfaced to ads platforms, where the webpages connecting them are analyzed further through association rules generated by a Frequent Itemset Mining process. These rules give insight into how similar the rollouts by ads platforms are, showing how extensive the shared connections to different webpages can be. With keyword prefiltering, an added contextual layer enhances the algorithm by catering to more specific industries, enabling a targeting mechanism that places ads where a user's intended content should gain visibility.
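The Frequent Itemset Mining step described in the summary can be sketched as follows. This is a minimal, brute-force illustration, not the study's implementation: the page and domain names are invented stand-ins, and a real pipeline over Common Crawl link data would use an optimized library (e.g. mlxtend's apriori) rather than enumerating all item combinations.

```python
from itertools import combinations

# Hypothetical toy data: each "transaction" is the set of outbound link
# domains found on one crawled page (illustrative, not from the paper).
pages = {
    "page_a": {"news.example", "ads.example", "shop.example"},
    "page_b": {"news.example", "ads.example"},
    "page_c": {"ads.example", "shop.example"},
    "page_d": {"news.example", "ads.example", "shop.example"},
}

def frequent_itemsets(transactions, min_support):
    """Return every link set whose support (fraction of pages
    containing all its links) is at least min_support."""
    items = sorted(set().union(*transactions.values()))
    n = len(transactions)
    result = {}
    for k in range(1, len(items) + 1):
        for combo in combinations(items, k):
            support = sum(
                1 for links in transactions.values() if set(combo) <= links
            ) / n
            if support >= min_support:
                result[combo] = support
    return result

def association_rules(itemsets, min_confidence):
    """Derive rules X -> Y with confidence support(X ∪ Y) / support(X)."""
    rules = []
    for itemset, support in itemsets.items():
        if len(itemset) < 2:
            continue
        for i in range(1, len(itemset)):
            for lhs in combinations(itemset, i):
                if lhs in itemsets:
                    conf = support / itemsets[lhs]
                    if conf >= min_confidence:
                        rhs = tuple(x for x in itemset if x not in lhs)
                        rules.append((lhs, rhs, conf))
    return rules

freq = frequent_itemsets(pages, min_support=0.5)
rules = association_rules(freq, min_confidence=0.9)
```

On this toy data the mined rules all point at `ads.example` (e.g. pages linking to `news.example` also link to `ads.example` with confidence 1.0), which mirrors the summary's idea of using such rules to spot how widely an ads platform's links recur across otherwise distinct pages.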