Neural Network-based Autonomous Allocation of Resources in Virtual Networks

Rashid Mijumbi, Juan-Luis Gorricho, Joan Serrat (Universitat Politècnica de Catalunya, 08034 Barcelona, Spain)
Maxim Claeys, Jeroen Famaey, Filip De Turck (Ghent University - iMinds, B-9050 Gent, Belgium)

Abstract—Network virtualisation has received attention as a way to allow for sharing of physical network resources. Sharing resources involves mapping of virtual nodes and links onto physical nodes and links respectively, and thereafter managing the allocated resources to ensure efficient resource utilisation. In this paper, we apply artificial neural networks for a dynamic, decentralised and autonomous allocation of physical network resources to the virtual networks. The objective is to achieve better efficiency in the utilisation of substrate network resources while ensuring that the quality of service requirements of the virtual networks are not violated. The proposed approach is evaluated by comparison with two static resource allocation schemes and a reinforcement learning-based approach.

Keywords—Artificial neural networks, network virtualisation, resource allocation, reinforcement learning, autonomous systems.

I. INTRODUCTION

Network virtualisation allows substrate network (SN) owners to lease out part of their infrastructure as a service to service providers who create virtual networks (VN) to provide end-to-end services to end-users. A VN is made up of a set of virtual links and nodes which are supported by SN physical paths and nodes respectively. Efficient sharing of SN resources among VNs can be achieved in two steps. The first, known as virtual network embedding (VNE), involves mapping of virtual nodes and links to substrate nodes and paths, subject to a set of pre-defined constraints (e.g. topology, node queue size and bandwidth). The second step dynamically manages the allocation of resources to virtual nodes and links throughout the lifetime of a VN.
While many approaches have been proposed for the management of resources in virtual networks, the number of decentralised and dynamic solutions is still limited. In a previous work [], we proposed a decentralised scheme for dynamic resource allocation (DRA) in VNs based on reinforcement learning (RL) and a look-up table based policy. Since a look-up table representation suffers from the curse of dimensionality [], the state and action space was discretised to limit its size, as well as the high memory cost of reading, writing and storing the learning policy. This however comes at a cost in efficiency, as the learning algorithm is constrained in terms of perception and action granularity.

In this paper, we improve the efficiency of [] by proposing an autonomous system based on artificial neural networks (ANN) to achieve an adaptable allocation of resources to virtual networks, without restricting the input-output space dimensions. We start by representing each substrate node and link as an ANN whose input is the network resource status and whose output is an allocation action. We then use a reinforcement-like error function to evaluate the desirability of ANN outputs, and hence perform online training of the ANN.

The rest of the paper is organised as follows: We present related work in Section II. Section III gives a brief theoretical background on the resource management problem in network virtualisation, and on ANNs. The proposed approach is presented in Section IV and evaluated in Section V. Finally, Section VI concludes the paper, giving an outlook for future work.

II. RELATED WORK

A comprehensive survey of the state-of-the-art in VNE can be found in []. Most of the approaches perform a static embedding without any consideration of possible adjustments to initial embeddings, while those that propose dynamic solutions allocate a fixed amount of node and link resources to the VNs throughout their lifetime. Since network load varies with time due to non-uniform user traffic, allocating a fixed amount of resources based on peak load could lead to an inefficient utilisation of overall SN resources, especially during periods when the virtual nodes and/or links are lightly loaded.

Existing work on DRA is based on three main approaches: control theory, performance dynamics modelling and workload prediction. For example, [] is a control theoretic approach, [] is based on performance dynamics, while [] uses workload prediction. The difference between our proposal and these works lies not only in the solution tool (ANN), but also in the application domain (network virtualisation). DRA in VNs presents additional challenges as we have to deal with different resource types (such as bandwidth and queue size) which are not only segmented into many links and nodes, but also require different quality of service guarantees. In addition, in a VN environment, the managed resources are dependent on each other; for example, a given virtual link can be mapped onto more than one substrate link, and the resources allocated to a virtual node may affect the performance of virtual links attached to it, say in terms of increased routing delays.

A combination of ANNs and RL has been applied to many problems such as [], [], []. In these proposals, ANNs are used as function approximators for the RL policy. The proposal in this paper differs from these works on two fronts: (1) we use RL to train the ANN rather than using ANNs to approximate the RL policy, which allows us to do away with the need for training examples and/or target outputs usually needed for learning in neural networks¹, and (2) we apply the combination of ANN and RL to a network virtualisation environment.

¹ While we still use some training examples in our proposal (see Section IV-B3), they are only aimed at guiding the ANN structure design as well as ensuring a faster convergence of the algorithm (through problem-specific weight initialisation), rather than being a requirement as they would be in a typical ANN learning scenario.

III. THEORETICAL BACKGROUND

This section introduces the two main steps involved in resource management in VNs: virtual network embedding (VNE) and dynamic resource allocation (DRA). We also introduce artificial neural networks (ANN).

A. Virtual Network Embedding (VNE)

VNE involves mapping of VNs onto a SN, and is initiated by a service provider (SP) specifying resource requirements for both nodes and links to an infrastructure provider. The specification of VN resource requirements is usually represented by a weighted undirected graph Gv = (Nv, Lv), where Nv and Lv represent the sets of virtual nodes and links respectively. Each virtual link lij ∈ Lv or virtual node i ∈ Nv usually has requirements such as maximum delay, CPU, queue size, bandwidth, etc. In a similar way, the SN node and link capacities can be represented. For a successful mapping, all the VN node and link mappings should be in accordance with the VNE constraints. VNE is out of the scope of this work. Any of the static approaches in [] can be used for this stage.
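For concreteness, the sketch below shows one possible in-memory representation of such a VN request Gv = (Nv, Lv) with per-node and per-link requirements. It is purely illustrative: the field names (cpu, queue_size, bandwidth, max_delay) and the helper function are our own assumptions, not structures defined in the paper.

```python
# Illustrative sketch (not from the paper): one way to hold a VN request
# G_v = (N_v, L_v) with per-node and per-link requirements as plain Python
# structures. Field names (queue_size, bandwidth, max_delay) are assumptions.

vn_request = {
    "nodes": {
        # virtual node id -> node requirements
        "v1": {"cpu": 2.0, "queue_size": 10 * 1518},   # bytes
        "v2": {"cpu": 1.0, "queue_size": 15 * 1518},
    },
    "links": {
        # undirected virtual link (i, j) -> link requirements
        ("v1", "v2"): {"bandwidth": 1.5e6, "max_delay": 0.05},  # bps, seconds
    },
}

def total_demand(vn):
    """Aggregate demands, e.g. for admission checks against substrate capacity."""
    queue = sum(n["queue_size"] for n in vn["nodes"].values())
    bw = sum(l["bandwidth"] for l in vn["links"].values())
    return queue, bw

print(total_demand(vn_request))
```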
B. Dynamic Resource Allocation (DRA)

The next step, which is the focus of this paper, follows a successful VNE. It involves the lifecycle management of resources allocated/reserved for the mapped VN, and is aimed at ensuring optimal utilisation of overall SN resources. Our consideration is that SPs reserve resources to be used for transmitting user traffic, and therefore, after successful mapping of a given VN, user traffic is transmitted over the VN. Actual usage of allocated resources is then monitored and, based on the level of utilisation, we dynamically and opportunistically adjust the allocated resources. The opportunistic use of resources involves carefully taking advantage of unused virtual node and link resources to ensure that other VN requests are not rejected while resources reserved for already mapped VNs are idle. It is however a delicate balancing act: while VNs should not have idle resources, the allocated resources must be sufficient to ensure that quality of service parameters such as packet drop rate and delay for the VNs are not affected. In Section IV, we detail the proposed ANN approach.

C. Artificial Neural Networks (ANN)

ANNs are collections of computing nodes known as neurons which operate as summing devices, interconnected by links. A neuron receives one or more inputs, which are first multiplied by weights along each link, and then summed to produce an output. The output is then passed through an activation function (such as the logistic function []), which determines the input-output behaviour of the neuron. In ANNs, neurons are arranged in layers, with each layer consisting of one or more neurons. The most commonly used structure of ANNs is made up of 3 layers: an input layer, a hidden layer, and an output layer. The learning ability of neural networks lies in their ability to adjust their weights. This is achieved by gradually minimising an error, defined as the difference between an actual output and a target output. The most popular method for learning in ANNs is called back-propagation (BP). In BP, after an output is obtained, an error signal - the difference between actual output and target output - is determined. The error signal is then "propagated backwards" from the output layer to the input layer, adjusting the network weights. Therefore, learning in ANNs requires that for every input, a target output must be known so as to determine the error. An introduction to ANNs and the back-propagation algorithm (and how it is used for learning in ANNs) can be found in [].
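The following minimal sketch illustrates the 3-layer structure and back-propagation mechanics described above, sized as in Section IV (3 inputs, 4 hidden neurons, 1 output) and driven by an externally supplied error signal as in Section IV-C. It is a reconstruction for illustration only, not the authors' implementation; the learning rate, weight initialisation and logistic output are assumptions.

```python
import numpy as np

# Minimal 3-4-1 feed-forward network with logistic activations, trained
# online by back-propagation. The "error" argument plays the role of
# (target - output); Section IV-C later replaces it with the RL-like e(v).

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 3))   # input -> hidden weights (assumed init)
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(1, 4))   # hidden -> output weights
b2 = np.zeros(1)

def forward(s):
    """s is the 3-element state vector; returns hidden and output activations."""
    h = sigmoid(W1 @ s + b1)
    y = sigmoid(W2 @ h + b2)
    return h, y

def backprop_step(s, error, lr=0.1):
    """One online BP update; globals are used only to keep the sketch short."""
    global W1, b1, W2, b2
    h, y = forward(s)
    # Output-layer delta: logistic derivative times the error signal.
    delta_out = error * y * (1.0 - y)
    # Hidden-layer delta propagated backwards through W2.
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
    W2 += lr * np.outer(delta_out, h)
    b2 += lr * delta_out
    W1 += lr * np.outer(delta_hid, s)
    b1 += lr * delta_hid

state = np.array([0.8, 0.3, 0.5])       # (Rav, Ruv, Ruz), see Section IV-B1
_, out = forward(state)
backprop_step(state, error=0.2 - out)   # drive the output towards 0.2
```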
IV. ANN-BASED DYNAMIC RESOURCE ALLOCATION

The system model used for our proposal is shown in Fig. 1. As can be observed from the figure, there are three main components: the multi-agent system representing the substrate network, the ANN that represents the internal components of each agent, and the evaluative feedback block that produces the error signal. In the following subsections, each of these elements of the model is detailed.

Fig. 1. Artificial Neural Network-based Resource Allocation Model

A. Multi-Agent System

The multi-agent system consists of all the agents that represent the SN. Specifically, each substrate node and link is represented by a node agent na ∈ Na and a link agent la ∈ La, where Na and La are the sets of node agents and link agents respectively. The node agents manage node queue sizes while the link agents manage link bandwidths. The agents dynamically adjust the resources allocated to virtual nodes and links, ensuring that resources are not left under-utilised, and that enough resources are available to meet VN requirements. As shown in Fig. 1, a given agent receives as input the state s of the substrate resource it manages, and outputs an action a.

B. Artificial Neural Network

Our proposal uses a 3-layer ANN. An important design issue of any ANN is determining the network topology, i.e. the number of neurons in each of the network layers.

1) Input Layer: We model the state of any virtual resource (node queue size or link bandwidth) v hosted on a substrate resource z by a 3-tuple, s = (Rav, Ruv, Ruz), where Rav is the percentage of the virtual resource demand currently allocated to it, Ruv is the percentage of allocated resources currently unused, and Ruz is the percentage of total substrate resources currently unused. Therefore, the input layer consists of 3 neurons, one for each of the variables Rav, Ruv and Ruz.

2) Output Layer: During each learning episode, an agent should perceive the state s and give an output. This output, a, is a scalar that indicates which action should be taken to change the resource allocation for the virtual resource v under consideration. The action may be aimed at increasing (if it is positive) or reducing (if it is negative) the resources allocated to the virtual node or link in question. Therefore, the output layer consists of 1 neuron, representing the action a. To illustrate the effect of an action: if a given virtual resource v has total allocated resources vr and the agent action is a (where −1 ≤ a ≤ +1), then the resulting resource allocation is vr = vr + a × vt, where vt is the total initial demand of the virtual resource (as specified in the VN request before the VNE).
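A small sketch of how an agent could assemble the state tuple and apply the scalar action to one virtual resource, following the definitions above. The clamping of the new allocation to [0, vt] is our assumption; the paper only states the update vr = vr + a × vt.

```python
# Illustrative sketch of the agent's input/output handling described above.
# State s = (Rav, Ruv, Ruz) is built from monitored values, and an action
# a in [-1, +1] rescales the allocation as v_r <- v_r + a * v_t.

def build_state(allocated, used, demand, substrate_free, substrate_capacity):
    """Return (Rav, Ruv, Ruz) as fractions in [0, 1]."""
    r_av = allocated / demand                      # share of demand allocated
    r_uv = (allocated - used) / allocated if allocated > 0 else 0.0
    r_uz = substrate_free / substrate_capacity     # free share of the substrate
    return (r_av, r_uv, r_uz)

def apply_action(allocated, demand, action):
    """Apply a in [-1, +1] to the current allocation of one virtual resource."""
    new_alloc = allocated + action * demand
    return min(max(new_alloc, 0.0), demand)        # bounds are an assumption

s = build_state(allocated=1.5e6, used=0.9e6, demand=2.0e6,
                substrate_free=4.0e6, substrate_capacity=10.0e6)
print(s, apply_action(allocated=1.5e6, demand=2.0e6, action=-0.25))
```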
3) Hidden Layer: The optimal number of neurons in the hidden layer of an ANN is problem specific, and is still an open research question. In this paper, we determine this number by experimentation, performing a search over the number of hidden layer neurons from NHL = 1 to 15. In order to achieve this, we need a test dataset. The dataset used for this purpose was saved from the q-table of the RL approach proposed in []. This q-table was the result of a learning system for a similar DRA task and gives the state-action values for the learning task. The dataset contains 512 entries, each showing the best action value in each of the 512 possible states.

With the above training set, a 10-fold cross-validation was performed in Weka 3.6 [], using the default parameters (learning rate, validation threshold, momentum, etc.) for the multilayer perceptron in Weka. Fig. 2 shows the average root mean square error (RMSE) values over 20 experiments. From the figure, the optimal number of neurons for the hidden layer is 4. The reason for choosing 4 neurons is not only to allow for a low RMSE, but also to avoid the possibility of over-fitting which could be caused by a network with many neurons. This experimentation also provides initial weights that are used to initialise the neural network, and hence avoid the slow learning characteristics (and hence slow convergence) of BP.

Fig. 2. Variation of RMSE with Number of Neurons in Hidden Layer
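The hidden-layer sweep was done with Weka 3.6's multilayer perceptron in the paper; the sketch below shows an equivalent 10-fold cross-validation in Python with scikit-learn. The 512-entry q-table dataset is replaced by random placeholder data, since the original dataset is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score, KFold

# Placeholder stand-in for the 512 saved (state -> best action value) entries.
rng = np.random.default_rng(0)
X = rng.random((512, 3))                   # state 3-tuples
y = rng.uniform(-1.0, 1.0, size=512)       # best action values

# Sweep the hidden-layer size from 1 to 15 and report the mean RMSE per size.
for n_hidden in range(1, 16):
    model = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="logistic",
                         max_iter=2000, random_state=0)
    scores = cross_val_score(
        model, X, y,
        cv=KFold(n_splits=10, shuffle=True, random_state=0),
        scoring="neg_root_mean_squared_error")
    print(n_hidden, -scores.mean())        # average RMSE over the 10 folds
```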
C. Evaluative Feedback

After each learning episode, the affected substrate and virtual nodes/links are monitored, taking note of the average utilisation of substrate resources, the delay on virtual links, and the packets dropped by virtual nodes due to buffer overflows. These values are fed back to the agent in the form of a performance evaluation, used by the error function to produce an error signal, which is in turn used by the BP algorithm to adjust the weights of the ANN, and hence improve future actions.

Error Function: The error e(v) is an indication of the deviation of the agent's actual action from a target action. The objective of the error function is to encourage high virtual resource utilisation while punishing na ∈ Na for dropping packets and la ∈ La for having high delays. Good actions by an agent are characterised by an e(v) equal or close to 0, while any deviation indicates an undesirable action. Therefore, the value of e(v) gives the degree of desirability or undesirability of the agent's action, and depends on the resources allocated to the virtual resources, the unutilised resources, the link delay in the case of la ∈ La, and the number of dropped packets in the case of na ∈ Na. The proposed error function is shown in (1):

    e(v) = Ruv + α·Pv,   ∀ na ∈ Na    (1a)
    e(v) = Ruv + β·Dv,   ∀ la ∈ La    (1b)

where α and β are constants aimed at ensuring that the magnitudes of the two terms in each of (1a) and (1b) are comparable. The values α = 0.05 and β = 40 used in this paper were determined by simulations. For example, Fig. 5 shows that the maximum value of Pv is about 20; therefore, to make these values comparable to 0 ≤ Ruv ≤ 1, we divide each value by 20 (i.e. multiply it by α = 0.05). Pv is the number of packets dropped by node na ∈ Na from the time the allocation action was taken, and Dv is the extra delay encountered by a packet using la ∈ La. The extra delay is calculated as the difference between the actual delay and the theoretical delay. We define the theoretical delay as the delay the virtual link would have if it was allocated 100% of its bandwidth demand². The actual delay is determined as the difference between when a packet is received at one end of the link and when it is delivered to the other end. Once the error is determined, the ANN weights are adjusted using BP.

² The simulations in this paper determine this value from a parallel simulation using a virtual network with 100% resource allocation.
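Equation (1) translates directly into a small helper, shown below with the stated values α = 0.05 and β = 40; the function signature and argument names are illustrative.

```python
# Direct transcription of the error function in (1). `r_uv` is the unused
# fraction of the allocated resources, `dropped` is Pv for node agents and
# `extra_delay` is Dv (in seconds) for link agents.

ALPHA = 0.05
BETA = 40.0

def error_signal(r_uv, dropped=None, extra_delay=None):
    """Return e(v): 0 is a desirable action, larger values are worse."""
    if dropped is not None:            # node agent, eq. (1a)
        return r_uv + ALPHA * dropped
    if extra_delay is not None:        # link agent, eq. (1b)
        return r_uv + BETA * extra_delay
    raise ValueError("provide either dropped packets or extra delay")

print(error_signal(r_uv=0.2, dropped=4))          # node agent example
print(error_signal(r_uv=0.1, extra_delay=0.005))  # link agent example
```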
V. PERFORMANCE EVALUATION

TABLE I. NETWORK TOPOLOGY PARAMETERS
Parameter                | Substrate Network | Virtual Network
Name (Model)             | Router Waxman     | Router Waxman
Size of main plane (HS)  | 250               | 250
Size of inner plane (LS) | 250               | 250
Node Placement           | Random            | Random
GrowthType               | Incremental       | Incremental
Neighbouring Nodes       | 3                 | 2
alpha (Waxman Parameter) | 0.15              | 0.15
beta (Waxman Parameter)  | 0.2               | 0.2
BWDist                   | Uniform           | Uniform

TABLE II. NS3 PARAMETERS
Parameter              | Value
Queue Type             | Drop Tail
Queue Drop Mode        | Bytes
Maximum Queue Size     | 6,553,500 Bytes
Maximum Packets Per VN | 3500 Packets
Number of VNs          | 1024
Network Mask           | 255.255.224.0
IP Address Range       | 10.0.0.0 - 10.255.224.0
Network Protocol       | IPv4
Transport Protocol     | TCP
Packet MTU             | 1518 Bytes
Packet Error Rate      | 0.000001 per Byte
Error Distribution     | Uniform (0, 1)
Port                   | 8080

TABLE III. SN AND VN PROPERTIES
Parameter               | Substrate Network  | Virtual Network
Minimum Number of Nodes | 25                 | 5
Maximum Number of Nodes | 35                 | 15
Minimum Node Queue Size | (100 × 1518) Bytes | (10 × 1518) Bytes
Maximum Node Queue Size | (200 × 1518) Bytes | (20 × 1518) Bytes
Minimum Link Bandwidth  | 2.0 Mbps           | 1.0 Mbps
Maximum Link Bandwidth  | 10.0 Mbps          | 2.0 Mbps

TABLE IV. COMPARED ALGORITHMS
Code      | Resource Allocation Approach
D-ANN     | Dynamic, based on Artificial Neural Networks [Our Contribution]
D-RL      | Dynamic, based on Reinforcement Learning
S-CNMMCF  | Static, Coordinated Node Mapping and MCF for link mapping
S-OS      | Static, link-based optimal one-shot Virtual Network Embedding

A. Simulation Environment

To evaluate our proposal, SN and VN topologies were generated using Brite [] with the settings shown in Table I. Thereafter, VN requests arrive one at a time at the SN. Whenever a VN request is accepted by the SN, the VN topology is created in NS3 (we added a network virtualisation module to NS3 based on the parameters in Table II). Our NS3 module allows us to create a traffic application for each accepted VN request, and the traffic application starts transferring packets over the VN. The traffic application generates packets based on real traffic traces from the CAIDA anonymised Internet traces []. This dataset contains anonymised passive traffic traces from CAIDA's equinix-chicago and equinix-sanjose monitors on high-speed Internet backbone links, and is mainly used for research on the characteristics of Internet traffic, including flow volume and duration. The trace used in this paper was collected on 20th December 2012 and contains over 3.5 million packets. We divide these packets among 1000 VNs, so that each VN receives about 3500 packets. These traces are used to obtain packet sizes and the time between packet arrivals for each VN. As the source and destination of the packets are anonymised, for each packet in a given VN we generate a source and destination IP address in NS-3 using a uniform distribution. Simulations were run on an Ubuntu 12.04 LTS virtual machine with 4.00 GB RAM and a 3.00 GHz CPU.

B. Simulation Parameters

Both substrate and virtual networks were generated on a 250 × 250 grid. The queue size and bandwidth capacities of substrate nodes and links, as well as the demands of virtual networks, are all uniformly distributed between the minimum and maximum values shown in Table III. Link delays are as determined by Brite. Each virtual node is allowed to be located within a uniformly distributed distance 75 ≤ x ≤ 150 of its requested location, measured in grid units. We assume that VN requests arrive following a Poisson distribution with an average rate of 1 per minute. The average service time of each VN is 60 minutes and is assumed to follow a negative exponential distribution. The ANN algorithm runs every minute³.

³ It is worth remarking that while this paper has not studied the effect of the frequency of running the algorithm, we expect that a lower running frequency would make the dynamic allocation comparable to the static one, while a higher frequency might negatively impact system stability.
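As an illustration of the stated workload model, the sketch below draws Poisson arrivals (mean rate 1 per minute) and exponentially distributed lifetimes (mean 60 minutes) for a sequence of VN requests. It is a sketch of the assumptions above, not the simulation code used in the paper.

```python
import random

# VN requests arrive as a Poisson process, so inter-arrival times are
# exponential with mean 60 s; each VN then stays for an exponentially
# distributed service time with a 60-minute mean.

ARRIVAL_RATE = 1.0 / 60.0      # requests per second (1 per minute)
MEAN_LIFETIME = 60 * 60.0      # seconds (60 minutes)

def generate_vn_events(n_requests, seed=0):
    rng = random.Random(seed)
    t = 0.0
    events = []
    for i in range(n_requests):
        t += rng.expovariate(ARRIVAL_RATE)            # inter-arrival time
        lifetime = rng.expovariate(1.0 / MEAN_LIFETIME)
        events.append({"vn": i, "arrival": t, "departure": t + lifetime})
    return events

for e in generate_vn_events(3):
    print(e)
```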
C. Compared Algorithms

We compare the performance of our proposal with 3 representative state-of-the-art solutions. The first [] uses RL for DRA; the second [] performs a coordinated node and link mapping; and the third [] is a static baseline formulation that performs a one-shot mapping, and is also used in the performance evaluations in []. The solution in [] was adapted to fit into our formulation of the problem. In particular, for [] the link delay requirements were neglected at the embedding stage, and for this reason it is not used in the QoS evaluations. In addition, our consideration in this paper is for unsplittable flows. We identify and name the compared approaches in Table IV. The mathematical programs in all proposals are solved using CPLEX 12.5 [].

Fig. 3. Acceptance Ratio
Fig. 4. Resource Utilisation
Fig. 5. Number of Dropped Packets
Fig. 6. Delay Variation

D. Performance Metrics

1) Embedding Quality: We define embedding quality as a measure of how efficiently the algorithm uses the SN resources for accepting VN requests. This is evaluated using the acceptance ratio and the average level of utilisation of SN resources. The acceptance ratio is a measure of the long-term proportion of VN requests that are accepted by the substrate network. The average level of utilisation of substrate resources is a measure of how efficiently the SN resources are used.

2) Quality of Service: We use both packet drop and delay variation as indications of the quality of service. As shown in Table II, we model the networks to drop packets due to both node buffer overflow and packet errors. We determine the packet delay variation as the difference in delays encountered by packets transmitted over the network in two successive time intervals, while packet drop is the number of packets dropped by a given VN during a given time interval. For these evaluations, the time interval used to update these measurements corresponds to the transmission of every 100 packets.
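A minimal sketch of the two embedding-quality metrics defined above; how the utilisation samples are collected per interval is an assumption for illustration.

```python
# Embedding-quality metrics as defined in Section V-D. The counters and the
# utilisation samples are assumed inputs gathered during a simulation run.

def acceptance_ratio(accepted, total_requests):
    """Long-term fraction of VN requests accepted by the substrate network."""
    return accepted / total_requests if total_requests else 0.0

def average_utilisation(samples):
    """Mean of per-interval utilisation samples, each = allocated / capacity."""
    return sum(samples) / len(samples) if samples else 0.0

print(acceptance_ratio(accepted=720, total_requests=1000))
print(average_utilisation([0.55, 0.62, 0.60, 0.58]))
```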
E. Discussion of Results

The simulation results are shown in Figs. 3-6. As can be seen from Fig. 3, while both dynamic approaches perform better than the static ones in terms of VN acceptance ratio, the ANN approach outperforms all three others. The reason for the dynamic approaches performing better than the static ones is that in the former case the substrate network always has more available resources than in the latter, which is a direct result of allocating and reserving only the required resources for the virtual networks. The fact that D-ANN outperforms the RL approach can be attributed to the ANN approach modelling the states and actions with better granularity, i.e. without restricting the states and actions to a few discrete levels. We also note that S-OS has a better acceptance ratio than S-CNMMCF. This is because S-CNMMCF performs node and link mapping in two separate steps, so link mappings could fail due to the locations of already mapped nodes.

Fig. 4 shows the average utilisation of SN resources. It can be observed that, except for S-CNMMCF, the other three approaches on average use the same amount of SN resources. The lower resource utilisation of S-CNMMCF is expected, as it has slightly more resource requests rejected, either due to a node mapping that makes link mapping impossible or due to previous link mappings using more resources. The fact that S-OS, D-RL and D-ANN all have on average the same resource utilisation profile is mainly due to all of them having the same initial mapping algorithm (which is S-OS). It can however be noted that while S-OS, D-RL and D-ANN all have similar resource utilisation levels, D-ANN uses these resources to achieve a higher acceptance of VNs, which further confirms the extra resource allocation efficiency of the ANN approach.

Fig. 5 shows that S-OS has an almost constant packet drop rate, while that for D-RL and D-ANN is initially high but gradually converges to that of S-OS. At the beginning of the learning process, the dynamic approaches vary the queue sizes quite considerably, leading to more packet drops. The fact that D-ANN has a lower packet drop rate than D-RL over the learning period can be explained by D-ANN's better granularity in perceiving the state of resources and allocating them. We also note that the initial drop rate of D-ANN is lower than that of D-RL, which can be attributed to the weight initialisation obtained from Weka (see Section IV-B3).

Finally, Fig. 6 shows that the packet delay variation for the two dynamic approaches is initially higher but reduces over the learning period. Once more, these differences are attributed to the initial learning period, and the difference between D-RL and D-ANN is due to the better options in perception and action for D-ANN, as well as the weight initialisation in D-ANN.

VI. CONCLUSION

This paper has proposed a distributed and dynamic approach for the allocation of resources in virtual networks. We applied artificial neural networks to ensure that the allocation agents perceive a continuous network state and take continuous resource allocation actions. We have been able to show through simulation that our proposal improves the acceptance ratio of virtual networks, which would translate into higher revenue for the infrastructure providers, while ensuring that the quality of service of the virtual networks is not negatively affected.

In future, we intend to extend this proposal to the multi-domain environment, which raises more questions, especially with regard to cooperation and trust between agents as well as the need for negotiation. We will also study the possibilities of implementing our proposal in a real network, say, by setting up a server to collect VN requirements and user traffic characteristics, and using this in a prototype LAN.

ACKNOWLEDGMENT

This work was partly funded by FLAMINGO, a Network of Excellence project (318488) supported by the European Commission under its Seventh Framework Programme. We also acknowledge support from the EVANS project (PIRSES-GA-2010-269323) and project TEC2012-38574-C02-02 from Ministerio de Economia y Competitividad.

REFERENCES

A. Fischer, J. F. Botero, M. Till Beck, H. de Meer, and X. Hesselbach. Virtual network embedding: A survey. IEEE Communications Surveys & Tutorials, 15(4):1888–1906, 2013.
R. Mijumbi, J. L. Gorricho, J. Serrat, M. Claeys, J. Famaey, and F. De Turck. Contributions to efficient resource management in virtual networks. In Proceedings of the IFIP 8th International Conference on Autonomous Infrastructure, Management and Security (AIMS), 2014.
R. Mijumbi, J. L. Gorricho, J. Serrat, M. Claeys, F. De Turck, and S. Latre. Design and evaluation of learning algorithms for dynamic resource management in virtual networks. In Proceedings of the IEEE/IFIP Network Operations and Management Symposium (NOMS), 2014.
R. S. Sutton and A. G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998.
Fu Qi-ming, Liu Quan, Cui Zhi-ming, and Fu Yu-chen. A reinforcement learning algorithm based on minimum state method and average reward. In World Congress on Computer Science and Information Engineering, volume 5, pages 534–538, 2009.
J. Heaton. Introduction to Neural Networks for Java. Heaton Research, Inc., 2nd edition, 2008.
W. Pan, D. Mu, H. Wu, and L. Yao. Feedback control-based QoS guarantees in web application servers. In HPCC, pages 328–334. IEEE, 2008.
R. Han, L. Guo, M. M. Ghanem, and Y. Guo. Lightweight resource scaling for cloud applications. In 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), pages 644–651, 2012.
Y. Hu, J. Wong, G. Iszlai, and M. Litoiu. Resource provisioning for cloud computing. In Conference of the Center for Advanced Studies on Collaborative Research, pages 101–111, 2009.
J. Qiao, R. Fan, H. Han, and X. Ruan. Q-learning based on dynamical structure neural network for robot navigation in unknown environment. In ISNN (3), volume 5553 of Lecture Notes in Computer Science, pages 188–196. Springer, 2009.
S. Wang, Z. Song, H. Ding, and H. Shi. An improved reinforcement Q-learning method with BP neural networks in robot soccer. In ISCID (1), pages 177–180. IEEE, 2011.
R. Coulom. Reinforcement Learning Using Neural Networks, with Applications to Motor Control. PhD thesis, Institut National Polytechnique de Grenoble, 2002.
M. Chowdhury, M. R. Rahman, and R. Boutaba. ViNEYard: Virtual network embedding algorithms with coordinated node and link mapping. IEEE/ACM Transactions on Networking, 20(1):206–219, 2012.
S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall Press, Upper Saddle River, NJ, USA, 3rd edition, 2009.
S. R. Garner. Weka: The Waikato environment for knowledge analysis. In Proceedings of the New Zealand Computer Science Research Students Conference, pages 57–64, 1995.
A. Medina, A. Lakhina, I. Matta, and J. Byers. BRITE: An approach to universal topology generation. In Proceedings of the Ninth International Symposium in Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS '01), pages 346–353, Washington, DC, USA, 2001. IEEE Computer Society.
Network Simulator 3. http://www.nsnam.org/. Accessed: 2014-02-17.
The CAIDA Anonymized Internet Traces 2012 - 20 December 2012, equinix-sanjose.dirB.20121220-140100.UTC.anon.pcap.gz. http://www.caida.org/data/passive/passive_2012_dataset.xml. Accessed: 2014-02-17.
IBM ILOG CPLEX Optimizer. http://www-01.ibm.com/software/integration/optimization/cplex-optimizer/about/. Accessed: 2014-02-17.
