INTEGRATING SERVERLESS AND EDGE COMPUTING: A FRAMEWORK FOR IMPROVED QOS AND RESOURCE OPTIMIZATION
2024
Metrics Details
- Usage: 41
- Downloads: 32
- Abstract Views: 9
Article Description
With the launch of Google App Engine in 2008, serverless computing emerged as a popular cloud computing paradigm, simplifying application deployment by delegating infrastructure management to cloud providers. These providers handle server and resource management through automated provisioning and scaling. In addition, the ephemeral nature of serverless functions allows resources to be de-provisioned when idle and enables a granular pay-per-use pricing model that charges users only for their invocations, facilitating cost savings. This research explores the intersection of serverless and edge computing, leveraging the lower latency, reduced resource consumption, and improved energy efficiency of edge environments to enhance the performance of serverless functions and maintain service continuity in a Multi-access Edge Computing (MEC) environment. We propose a framework that proactively spawns multiple instances of functions based on predicted user movements, increasing solution reliability. To further optimize function deployment and relocation times, we introduce server selection criteria, a caching mechanism, and a distributed image registry that improve the image pulling and layer sharing processes. Numerical results and experiments show that these strategies effectively reduce relocation times and frequency, lower energy consumption, and optimize network usage.
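The description above outlines the framework at a high level: predict where a user is likely to move, pre-spawn function instances on nearby edge servers, and prefer servers whose local cache or distributed image registry already holds the needed container image layers. The sketch below is a rough illustration of that idea only; the class, function names, and scoring weights (EdgeServer, server_score, proactive_spawn) are assumptions made for the example, not the authors' actual algorithm or implementation.

```python
# Hypothetical sketch: rank candidate edge servers for proactive spawning of a
# serverless function instance, combining latency, load, and how many of the
# function's image layers are already cached locally. Names and weights are
# illustrative assumptions, not the paper's implementation.

from dataclasses import dataclass, field

@dataclass
class EdgeServer:
    name: str
    latency_ms: float                 # latency from the user's predicted position
    cpu_load: float                   # current CPU utilization, 0.0 - 1.0
    cached_layers: set = field(default_factory=set)  # image layers already present

def server_score(server: EdgeServer, required_layers: set) -> float:
    """Lower is better: penalize latency, load, and missing image layers."""
    missing = len(required_layers - server.cached_layers)
    return server.latency_ms + 100.0 * server.cpu_load + 50.0 * missing

def proactive_spawn(candidates, required_layers, k=2):
    """Pick the k best servers on which to pre-spawn an instance."""
    ranked = sorted(candidates, key=lambda s: server_score(s, required_layers))
    return [s.name for s in ranked[:k]]

if __name__ == "__main__":
    servers = [
        EdgeServer("edge-a", latency_ms=5, cpu_load=0.7, cached_layers={"base", "runtime"}),
        EdgeServer("edge-b", latency_ms=12, cpu_load=0.2, cached_layers={"base"}),
        EdgeServer("edge-c", latency_ms=30, cpu_load=0.1, cached_layers=set()),
    ]
    print(proactive_spawn(servers, required_layers={"base", "runtime", "app"}))
```

In the setting described by the abstract, the candidate servers and the user's predicted trajectory would come from the MEC platform and a mobility predictor; here they are hard-coded purely to keep the example self-contained and runnable.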