Simple item record

dc.contributor.author: Sharma, Megha
dc.contributor.TAC-Chair: Ghosh, Diptesh
dc.contributor.TAC-Member: Gajulapalli, Ravindra S.
dc.contributor.TAC-Member: Bandyopadhyay, Tathagata
dc.date.accessioned: 2010-07-28T12:56:54Z
dc.date.available: 2010-07-28T12:56:54Z
dc.date.copyright: 2010
dc.date.issued: 2010
dc.identifier.uri: http://hdl.handle.net/11718/6535
dc.description.abstract: A reliable network is a network in which the network elements, i.e., arcs and/or nodes, are functional only with a pre-specified probability. Such networks are used to model real-life networks such as telecommunication networks and logistics networks. Problems on reliable networks are broadly of two types: the reliable network evaluation problem, in which one is given a reliable network and is required to compute a performance measure for the network; and the reliable network design problem, in which one is given a ground network and is required to choose a sub-network of the ground network that optimizes a pre-specified set of characteristics. Since not all network elements are functional at all points in time, conventional performance measures for networks become random variables for reliable networks.

In this thesis we consider reliable networks in which nodes are perfectly functional but arcs have a non-zero probability of failure, and we study the maximum s-t flow problem on such networks. In the literature, the performance of such a network is measured in terms of the expected value of the maximum s-t flow through it, or in terms of the probability with which the value of the maximum s-t flow through it exceeds a given threshold. While these performance measures provide information about the performance of networks in general, we argue that they do not provide important information about networks that model critical applications. In such networks, the maximum s-t flow values in network states in which a large number of links have failed (called downside states) are equally important. A central theme of this thesis is therefore the use of risk measures that provide information about the performance of a reliable network in its downside states. We propose two such measures, which we call the Downside Risk (DsR) measure and the Conditional Downside Risk (CDsR) measure.

The work in this thesis addresses four inter-related problems on reliable networks. The first problem is that of estimating expected maximum s-t flow values. In the literature, this is done through Monte Carlo simulations in which a sufficient number of states of the network are generated and the maximum s-t flow value in each network state is evaluated de novo. We propose a technique called warm start, in which the maximum s-t flow value in a network state is computed from the residual network obtained after computing the maximum s-t flow value in a related state. We implement an algorithm called WARMSTART, based on Monte Carlo simulation and incorporating the warm start idea, and observe that it is three times faster than comparable existing algorithms for estimating expected maximum s-t flow values in reliable networks with up to 2500 arcs (an illustrative sketch of this estimation setting follows the record below).

The second problem that we address is that of computing the probability mass function of maximum s-t flow values in reliable networks in which the probabilities of arcs being functional are high. In the literature, this is done through algorithms that either compute all s-t paths in a network, compute all s-t cuts, or exhaustively enumerate maximum s-t flows in all network states; all of these are computationally impractical for all but very small networks. We design and implement a lexicographic search algorithm called TOP-DOWN, incorporating warm start, which allows us to compute such probability mass functions for much larger networks within reasonable time.
The third problem that we address is that of computing the DsR and CDsR measures for reliable networks. Since these measures are new, there are no algorithms in the literature to compute them. We develop and implement an algorithm called BOTTOM-UP, which incorporates warm start as well as maximal sets of arc-disjoint s-t cuts, to compute the DsR and CDsR measures for reliable networks with up to 110 arcs.

The last problem that we address is that of designing reliable networks. We use multi-objective genetic algorithms and multi-objective particle swarm optimization algorithms in our design, with four objectives: the cost of establishing the network, the expected maximum s-t flow value in the network, its DsR, and its CDsR. The Pareto fronts generated by the algorithms are compared on the average number of points in the fronts, the uniformity of the distribution of those points, the hypervolumes they dominate, and the average distance of the points in each front from a reference point (an illustrative sketch of such comparison metrics also follows the record below). Our experiments are based on networks with up to 380 arcs. We observe that the particle swarm optimization algorithms converge faster than the genetic algorithms, although the Pareto fronts obtained from the genetic algorithms are observed to be superior to those obtained from the particle swarm optimization algorithms.

We expect that the work in this thesis will be of use to designers of practical networks, allowing them to understand the performance of existing networks better and to design networks with better performance characteristics. [en]
dc.language.iso: en [en]
dc.relation.ispartofseries: TH;2010/16
dc.subject: Mathematics [en]
dc.subject: Statistics [en]
dc.subject: Mathematics - Algorithms [en]
dc.subject: Network design - Models [en]
dc.title: Algorithms for evaluation and design of reliable networks [en]
dc.type: Thesis [en]
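
Illustrative sketch 1: the estimation problem described in the abstract can be pictured with a minimal Monte Carlo baseline. This is not the thesis's WARMSTART algorithm: each sampled network state is solved from scratch with networkx, whereas WARMSTART reuses the residual network of a related state. The example network, the independent-failure assumption, and the failure probabilities below are hypothetical.

# Minimal Monte Carlo sketch: expected maximum s-t flow in a reliable network
# whose arcs fail independently. Each sampled state is solved de novo with
# networkx (the thesis's warm-start reuse of residual networks is NOT shown).
import random
import networkx as nx

def sample_state(G, p_fail):
    """Return a copy of G keeping each arc only if it survives its failure draw."""
    H = nx.DiGraph()
    H.add_nodes_from(G.nodes())
    for u, v, data in G.edges(data=True):
        if random.random() >= p_fail[(u, v)]:
            H.add_edge(u, v, capacity=data["capacity"])
    return H

def estimate_expected_max_flow(G, p_fail, s, t, n_samples=10_000):
    """Average the maximum s-t flow value over independently sampled states."""
    total = 0.0
    for _ in range(n_samples):
        H = sample_state(G, p_fail)
        flow_value, _ = nx.maximum_flow(H, s, t)
        total += flow_value
    return total / n_samples

if __name__ == "__main__":
    # Hypothetical 4-node ground network with arc capacities.
    G = nx.DiGraph()
    edges = [("s", "a", 10), ("s", "b", 5), ("a", "t", 7), ("b", "t", 8), ("a", "b", 4)]
    for u, v, cap in edges:
        G.add_edge(u, v, capacity=cap)
    p_fail = {(u, v): 0.1 for u, v, _ in edges}  # each arc fails with probability 0.1
    print(estimate_expected_max_flow(G, p_fail, "s", "t", n_samples=2000))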
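
Illustrative sketch 2: two of the Pareto-front comparison criteria named in the abstract, the average distance of front points from a reference point and a uniformity (spacing) statistic. The abstract does not give the thesis's exact formulas, so these are standard textbook-style variants; the two-objective fronts and the reference point are made-up data.

# Simple Pareto-front comparison statistics (illustrative, not the thesis's code).
import math

def _dist(p, q):
    """Euclidean distance between two objective vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def avg_distance_to_reference(front, reference):
    """Mean Euclidean distance from each point of the front to a reference point."""
    return sum(_dist(p, reference) for p in front) / len(front)

def spacing(front):
    """Spacing-style uniformity: std. deviation of nearest-neighbour distances
    (lower values indicate a more uniformly distributed front)."""
    nearest = [min(_dist(p, q) for j, q in enumerate(front) if j != i)
               for i, p in enumerate(front)]
    mean = sum(nearest) / len(nearest)
    return math.sqrt(sum((d - mean) ** 2 for d in nearest) / len(nearest))

if __name__ == "__main__":
    # Hypothetical 2-objective fronts (e.g., cost vs. a flow-based objective).
    front_ga = [(1.0, 9.0), (2.0, 7.5), (3.0, 6.0), (4.0, 5.2)]
    front_pso = [(1.2, 9.5), (2.8, 7.9), (4.1, 6.5)]
    ref = (0.0, 0.0)
    for name, front in [("GA", front_ga), ("PSO", front_pso)]:
        print(name, len(front),
              avg_distance_to_reference(front, ref), spacing(front))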

