Results

### Computational Analysis and Efficient Algorithms for Micro and Macro OFDMA Downlink Scheduling

Abstract — OFDMA is one of the most important modulation and access methods for future mobile networks. Before transmitting a frame on the downlink, an OFDMA base station has to invoke an algorithm that determines which of the pending packets will be transmitted, what modulation should be used for each of them, and how to construct the complex OFDMA frame matrix as a collection of rectangles that fit into a single matrix with fixed dimensions. We propose efficient algorithms, with performance guarantees, that solve this intricate OFDMA scheduling problem by breaking it down into two sub-problems, referred to as macro and micro scheduling. We analyze the computational complexity of these sub-problems and develop efficient algorithms for solving them.

[Figure: an OFDMA frame laid out over subchannel logical number vs. time (OFDMA symbol number), containing a preamble, FCH, downlink & uplink maps, and downlink bursts #1–#7.]
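The micro-scheduling step described above amounts to packing rectangular bursts into a fixed frame matrix. As an illustration only (the paper's actual algorithms and performance guarantees are not reproduced here), a minimal first-fit shelf-packing heuristic in Python, with all names hypothetical:

```python
def shelf_pack(frame_rows, frame_cols, bursts):
    """Place rectangular bursts (rows, cols) into a frame_rows x frame_cols
    matrix using a first-fit shelf heuristic. Returns a list of
    (burst_index, top_row, left_col) placements; bursts that do not fit
    are skipped."""
    placements = []
    shelf_top = 0      # top row of the current shelf
    shelf_height = 0   # height of the tallest burst placed on this shelf
    cursor_col = 0     # next free column on the current shelf
    # Packing taller bursts first tends to waste less shelf space.
    for i, (r, c) in sorted(enumerate(bursts), key=lambda t: -t[1][0]):
        if cursor_col + c > frame_cols:   # shelf is full: open a new one
            shelf_top += shelf_height
            shelf_height = 0
            cursor_col = 0
        if shelf_top + r > frame_rows or c > frame_cols:
            continue                      # burst cannot be placed
        placements.append((i, shelf_top, cursor_col))
        cursor_col += c
        shelf_height = max(shelf_height, r)
    return placements
```

This is far simpler than what a real base station needs (it ignores modulation choice and per-packet deadlines), but it shows why the frame-construction sub-problem is a two-dimensional packing problem.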

### Maximising the Quality of Influence

2011

In percolation theory, vertices within a graph have a binary state: either active or inactive. Furthermore, a percolation process decides how activation spreads within the graph. Firstly, we propose and analyse a simple data-driven percolation process in which percolations are preliminarily learnt from a graph with observed percolations. Secondly, we study a problem related to the one solved by Kempe et al. in [1]: given a percolation process, which k vertices should one choose in order to maximise the number of active vertices at the end of the process? This question is important in many areas, ranging from viral marketing to the study of epidemic spread. We generalise the problem by considering activations in [0, 1], measuring the “quality” of percolation, with percolation decaying along edges in the percolation graph. For a varying cost of activating each vertex, we maximise the total activation whilst keeping within a budget L. The problem can be solved with a greedy algorithm with a guaranteed approximation quality, and furthermore we show its connection to the maximal coverage problem. The resulting algorithm is analysed empirically over predicted percolation graphs on a synthetic dataset and on a real dataset modelling information diffusion within a social network.
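The budgeted greedy described above can be sketched as follows. This is a generic cost-benefit greedy for a monotone submodular quality function, not the paper's exact algorithm (analyses of this kind usually also compare against the best single affordable vertex to obtain the approximation guarantee); all names are illustrative:

```python
def greedy_budgeted(candidates, cost, quality, budget):
    """Greedily pick seed vertices maximising a set-quality function under a
    budget. quality(S) returns total activation for seed set S (assumed
    monotone submodular); cost[v] is the activation cost of vertex v."""
    chosen, spent = set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for v in candidates:
            if v in chosen or spent + cost[v] > budget:
                continue
            # Marginal activation gained per unit of cost spent on v.
            gain = quality(chosen | {v}) - quality(chosen)
            if gain / cost[v] > best_ratio:
                best, best_ratio = v, gain / cost[v]
        if best is None:        # nothing affordable improves the quality
            return chosen
        chosen.add(best)
        spent += cost[best]
```

For example, with `quality` instantiated as plain set coverage, the greedy first picks cheap high-gain vertices and stops once the budget L is exhausted.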

### Fixed-Complexity Piecewise Ellipsoidal Representation of the Continual Reachability Set Based on Ellipsoidal Techniques

Abstract—In a previous paper we showed how the continual reachability set can be numerically computed using efficient maximal reachability tools. The resulting set is in general arbitrarily shaped and in practice possibly non-convex. Here, we present a fixed-complexity piecewise ellipsoidal under-approximation of the continual reachability set computed using ellipsoidal techniques. This provides a simple approximation of an otherwise relatively complicated set that can be used when a closed-form representation is needed. We demonstrate the results on a problem of control of anesthesia.
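One reason a piecewise ellipsoidal representation is convenient: membership testing is cheap, since a state is (conservatively) in the set as soon as it lies inside any one ellipsoid of the union. A minimal 2-D sketch, assuming each ellipsoid is given by a center c and a positive definite shape matrix Q defining {x : (x - c)^T Q^{-1} (x - c) <= 1} (names and conventions are illustrative, not the paper's):

```python
def in_ellipsoid(x, center, shape):
    """True if (x-c)^T Q^{-1} (x-c) <= 1 for a 2x2 shape matrix Q."""
    dx = [x[0] - center[0], x[1] - center[1]]
    (a, b), (c, d) = shape
    det = a * d - b * c
    # Inverse of [[a, b], [c, d]] is (1/det) * [[d, -b], [-c, a]].
    y0 = (d * dx[0] - b * dx[1]) / det
    y1 = (-c * dx[0] + a * dx[1]) / det
    return dx[0] * y0 + dx[1] * y1 <= 1.0

def in_piecewise_ellipsoidal_set(x, ellipsoids):
    """Membership in a union of ellipsoids: one containment suffices,
    because each ellipsoid under-approximates the true set."""
    return any(in_ellipsoid(x, c, Q) for c, Q in ellipsoids)
```

Because the representation under-approximates, a `True` answer is always safe, while a `False` answer may just mean the point fell in the gap between the ellipsoids and the true set.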

### An EBMC-Based Approach to Selecting Types for Entity Filtering

The quantity of entities in Linked Data is increasing rapidly. For entity search and browsing systems, filtering is very useful for helping users find entities that they are interested in. Type is a kind of widely-used facet and can be easily obtained from knowledge bases, which makes it possible to create filters by selecting at most K types of an entity collection. However, existing approaches often fail to select high-quality type filters due to complex overlap between types. In this paper, we propose a novel type selection approach based upon Budgeted Maximum Coverage (BMC), which can achieve integral optimization for the coverage quality of type filters. Furthermore, we define a new optimization problem called Extended Budgeted Maximum Coverage (EBMC) and propose an EBMC-based approach, which enhances the BMC-based approach by incorporating the relevance between entities and types, so as to create sensible type filters. Our experimental results show that the EBMC-based approach performs best compared with several representative approaches.
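To make the BMC baseline concrete, here is a toy sketch of the standard budgeted-maximum-coverage greedy with the usual best-single-set fallback; the data layout is hypothetical and this is the classical BMC greedy, not the paper's EBMC formulation (which additionally weights coverage by entity-type relevance):

```python
def bmc_type_filters(type_cover, type_cost, budget):
    """Budgeted Maximum Coverage greedy: repeatedly pick the type whose
    entity set adds the most coverage per unit cost, then compare the
    result against the single best affordable type (the comparison step
    used in standard BMC analyses to obtain an approximation guarantee)."""
    covered, chosen, spent = set(), [], 0.0
    while True:
        best, best_ratio = None, 0.0
        for t, entities in type_cover.items():
            if t in chosen or spent + type_cost[t] > budget:
                continue
            gain = len(entities - covered)   # newly covered entities
            if gain and gain / type_cost[t] > best_ratio:
                best, best_ratio = t, gain / type_cost[t]
        if best is None:
            break
        chosen.append(best)
        covered |= type_cover[best]
        spent += type_cost[best]
    # Fallback: a single large type can beat the ratio-driven greedy.
    singles = [t for t in type_cover if type_cost[t] <= budget]
    if singles:
        best_single = max(singles, key=lambda t: len(type_cover[t]))
        if len(type_cover[best_single]) > len(covered):
            return [best_single]
    return chosen
```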

### Which Verification for Soft Error Detection? (Project-Team ROMA)

2015

Abstract: Many methods are available to detect silent errors in high-performance computing (HPC) applications. Each comes with a given cost and recall (the fraction of all errors that are actually detected). The main contribution of this paper is to show which detector(s) to use, and to characterize the optimal computational pattern for the application: how many detectors of each type to use, together with the length of the work segment that precedes each of them. We conduct a comprehensive complexity analysis of this optimization problem, showing NP-completeness and designing an FPTAS (Fully Polynomial-Time Approximation Scheme). On the practical side, we provide a greedy algorithm whose performance is shown to be close to optimal for a realistic set of evaluation scenarios.

Keywords: fault tolerance, high performance computing, silent data corruption, partial verification, supercomputer, exascale.
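As a flavour of the cost/recall trade-off (a toy model, not the paper's: it assumes detectors fire independently, and ignores segment lengths, checkpointing, and the error-rate model), one can greedily add the detector type with the best marginal recall per unit cost until a budget is spent:

```python
def pick_detectors(detectors, budget):
    """Greedily choose how many detectors of each type to use.
    detectors: list of (name, cost, recall) tuples; types may be
    picked repeatedly. Returns (chosen names, combined recall),
    where combined recall is 1 - prod(1 - r_i) under the (strong)
    assumption that detectors miss errors independently."""
    chosen, spent, miss = [], 0.0, 1.0   # miss = P(all chosen detectors miss)
    while True:
        best, best_ratio = None, 0.0
        for name, cost, recall in detectors:
            if spent + cost > budget:
                continue
            gain = miss * recall          # marginal recall gained
            if gain / cost > best_ratio:
                best, best_ratio = (name, cost, recall), gain / cost
        if best is None:                  # budget exhausted
            return chosen, 1.0 - miss
        chosen.append(best[0])
        spent += best[1]
        miss *= 1.0 - best[2]
```

With a cheap partial verification (cost 1, recall 0.8) and an expensive full one (cost 5, recall 1.0) under budget 2, this greedy stacks two partial detectors, echoing the paper's point that cheap partial verifications can be the better buy.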