Hamilton Institute Seminars (iPod / small)
Seminars,Talks,Presentations,Hamilton,Institute
Public seminars held at the Hamilton Institute, NUI Maynooth, Ireland
Hamilton Institute
The Hamilton Institute is a multidisciplinary research centre established at the National University of Ireland, Maynooth in November 2001. The Institute seeks to provide a bridge between mathematics and its applications in ICT and biology.
In this podcast feed, we make accessible some of the best seminars held by members of the Hamilton Institute, visitors or guest speakers.
Furthermore, it also contains the lectures given as part of the 'Network Mathematics Graduate Programme'.
The video files contained in this feed should be fully compatible with all video capable iPods.
http://www.hamilton.ie/seminars.htm
Thu, 16 May 2024 21:53:19 +0100
en-GB
© 2008–2011. All rights reserved.
http://feeds2.feedburner.com/HamiltonInstituteSeminarsIpod
no
florian@knorn.org (Florian Knorn)
http://www.hamilton.ie/seminars/videos/itunes_logo.jpg
dirCast v0.7, modified by Florian Knorn
florian@knorn.org (Florian Knorn)
60
florian@knorn.org (Hamilton Institute)
Periodicity of Matrix Powers in Max Algebra
http://www.hamilton.ie/seminars/videos/66s_sergeev_lo.mp4
Wed, 07 Aug 2013 00:01:06 +0100
Speaker:
Dr. S. Sergeev
Abstract:
It is well known that the sequence of max-algebraic powers of irreducible nonnegative matrices is ultimately periodic. We express this periodicity in terms of CSR representations and give new bounds on the transient time after which the max-algebraic powers become periodic.
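A minimal numerical sketch of this ultimate periodicity in the max-plus semiring, where matrix "multiplication" replaces sum by max and product by plus (the matrix below is a made-up example; the transient and period shown are specific to it):

```python
import numpy as np

def maxplus_mul(A, B):
    # Max-plus matrix product: (A ⊗ B)_ij = max_k (A_ik + B_kj)
    n = A.shape[0]
    return np.array([[max(A[i, k] + B[k, j] for k in range(n))
                      for j in range(n)] for i in range(n)])

# Small irreducible matrix with max-cycle mean 0 (a hypothetical example)
A = np.array([[-1.0,  0.0],
              [ 0.0, -2.0]])

powers = [A]                      # powers[k] holds the (k+1)-th max-plus power
for _ in range(5):
    powers.append(maxplus_mul(powers[-1], A))

# For this A the powers settle into a period-2 cycle from the 2nd power on:
# A^4 == A^2, A^5 == A^3, ...
print(np.array_equal(powers[3], powers[1]))  # True
```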
Dr. S. Sergeev
no
55:20

Very High Speed Networking in VMs and Bare Metal
http://www.hamilton.ie/seminars/videos/65l_rizzo_lo.mp4
Fri, 05 Jul 2013 00:01:05 +0100
Speaker:
Prof. L. Rizzo
Abstract:
In this talk I will give a survey of solutions and tools that we have developed in recent years to achieve extremely high packet processing rates in commodity operating systems, running on bare metal and on virtual machines.
Our NETMAP framework supports processing of minimum-size frames from user space at 10 Gbit/s (14.88 Mpps) with very low CPU usage. Netmap is hardware-independent, supports multiple NIC types, does not require an IOMMU, and does not expose critical resources (e.g. device registers) to user space. A libpcap library running on top of netmap gives instant acceleration to pcap clients without even the need to recompile applications.
VALE is a software switch using the netmap API which delivers over 20 Mpps per port, or 70 Gbit/s with 1500-byte packets. Originally designed to interconnect virtual machines, VALE is also very convenient as a testing tool and as a high-speed IPC mechanism.
More recently we have extended QEMU, and with a few small changes (using VALE as a switch, paravirtualizing the e1000 emulator, and with small device-driver enhancements) we reached guest-to-guest communication speeds of over 1 Mpps (with socket-based clients) and 5 Mpps (with netmap-based clients).
NETMAP and VALE are small kernel modules, part of standard FreeBSD and also available as an add-on for Linux. The QEMU extensions are also available from the author and are being submitted to the qemu-devel list for inclusion in the standard distributions.
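As a quick arithmetic check on the 14.88 Mpps line-rate figure in the abstract: a minimum-size Ethernet frame occupies 84 bytes on the wire once the preamble and inter-frame gap are included, and at 10 Gbit/s that works out to exactly this packet rate:

```python
# 64-byte minimum frame + 8-byte preamble/SFD + 12-byte inter-frame gap
bits_per_frame = (64 + 8 + 12) * 8   # 672 bits per frame slot
line_rate = 10e9                     # 10 Gbit/s
pps = line_rate / bits_per_frame
print(round(pps / 1e6, 2))           # 14.88 (Mpps)
```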
Prof. L. Rizzo
no
1:12:05

ROMA: Random Overlook Mastering ATFM
http://www.hamilton.ie/seminars/videos/64c_lancia_lo.mp4
Thu, 21 Mar 2013 00:01:04 +0000
Speaker:
C. Lancia
Abstract:
Consider the arrival process defined by t_i = i + \xi_i, where the \xi_i are i.i.d. random variables. First introduced in the 1950s, this arrival process is of remarkable importance in Air Traffic Flow Management and other transportation systems, where scheduled arrivals are intrinsically subject to random variations; other frameworks where this model has proved capable of a good description of actual job arrivals include health care and crane-handling systems. This talk is ideally divided into two parts.
In the first half, I will show through numerical simulations two of the most important features of the model, namely the insensitivity with respect to the nature (i.e. the law) of the delays \xi_i and the excellent goodness of fit of the simulated queue-length distribution against the empirical distribution obtained from actual arrivals at London Heathrow airport. Further, I will show that the congestion related to this process is very different from the congestion of a Poisson process; this is due to the negative autocorrelation of the process.
In the second part, I will restrict the analysis to the case where the delays \xi_i are exponentially distributed. In this context, I will show some preliminary results on a possible strategy to find the stationary distribution of the queue length using a bivariate generating function.
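The negative autocorrelation is easy to see in simulation. A sketch under assumed parameters (Gaussian delays with standard deviation 2; the delay law and all numbers are illustrative): counting arrivals of t_i = i + \xi_i in unit intervals, a crowded interval tends to be flanked by sparse ones, because each scheduled arrival lands in exactly one interval.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
xi = rng.normal(0.0, 2.0, size=n)   # i.i.d. delays (illustrative law)
t = np.arange(n) + xi               # scheduled arrivals with random delays

# Arrivals per unit interval, away from the boundary
counts, _ = np.histogram(t, bins=np.arange(10, n - 10))

# Lag-1 autocorrelation of the interval counts
c = counts - counts.mean()
r1 = (c[:-1] * c[1:]).mean() / c.var()
print(r1 < 0, counts.var() < counts.mean())  # negatively correlated, and
                                             # under-dispersed vs Poisson
```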
C. Lancia
no
39:16

Machine-to-Machine in Smart Cities & Smart Grids: Vision, Technology & Applications
http://www.hamilton.ie/seminars/videos/63m_dohler_lo.mp4
Mon, 21 Jan 2013 00:01:03 +0000
Speaker:
Dr. M. Dohler
Abstract:
The unprecedented communication paradigm of machine-to-machine (M2M), facilitating 24/7 ultra-reliable connectivity between an a priori unseen number of automated devices, is currently gripping both industrial and academic communities. Whilst applications are diverse, the in-home market is of particular interest, since it is undergoing a fundamental shift from machine-to-human communications towards fully automated M2M. The aim of this presentation is thus to provide academic, technical and industrial insights into the latest key aspects of wireless M2M networks, with particular application to the emerging smart-city and smart-grid verticals.
Notably, I will provide an introduction to the particularities of M2M systems. Architectural, technical and privacy requirements, and thus applicable technologies, will be discussed. Notably, we will dwell on the capillary and cellular embodiments of M2M in smart homes. The focus of capillary M2M, useful for real-time data gathering in homes, will be on standards-compliant low-power multi-hop networking designs from the IEEE (802.15.4e) and IETF (6LoWPAN, ROLL, CoAP); furthermore, for the first time, low-power Wi-Fi will be dealt with and positioned in the ecosystem of capillary M2M. The focus of cellular M2M will be on the latest activities, status and trends in leading M2M standardization bodies, with technical focus on ETSI M2M and 3GPP LTE-MTC. Open technical challenges, along with the industry's vision of M2M and the shift it implies for entire industries, will be discussed during the talk.
Dr. M. Dohler
no
1:18:04

State Constrained Optimal Control
http://www.hamilton.ie/seminars/videos/62r_vinter_lo.mp4
Thu, 29 Nov 2012 00:01:02 +0000
Speaker:
Prof. R. Vinter
Abstract:
Estimates on the distance of a nominal state trajectory from the set of state trajectories confined to a closed set play an important unifying role in optimal control theory. They can be used to establish non-degeneracy of optimality conditions such as the Pontryagin Maximum Principle, to show that the value function describing the sensitivity of the minimum cost to changes in the initial condition is characterized as the unique generalized solution of the Hamilton-Jacobi equation, and for numerous other purposes. We discuss the validity of various presumed distance estimates and their implications, present recent counterexamples illustrating some unexpected pathologies, and pose some open questions.
Prof. R. Vinter
no
59:16

Effective Information Delivery Through Opportunistic Replication in Wireless Networks
http://www.hamilton.ie/seminars/videos/61l_tassiulas_lo.mp4
Wed, 28 Nov 2012 00:01:01 +0000
Speaker:
Prof. L. Tassiulas
Abstract:
Increased replication of information is observed in modern wireless networks, whether through pre-planned content-replication schemes, through opportunistic caching at intermediate relay nodes as the information flows to its final destination, or through overhearing of broadcast information on the wireless channel. In all cases, the information available at other nodes can be used to increase the efficiency of the information-delivery process. We will first take an information-theoretic perspective and present a scheme that exploits opportunistically overheard information to achieve the Shannon capacity of the broadcast erasure channel. We will then consider information transport in a flat multi-hop wireless network and present schemes for popularity-based spatial information replication, in association with anycast routing schemes, that achieve asymptotically optimal performance.
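The value of overheard information shows up already in the simplest two-receiver instance (a toy example with made-up payloads): if each receiver has overheard the packet intended for the other, a single XOR-coded broadcast lets both decode, halving the number of transmissions.

```python
# Each receiver already holds the other's packet (overheard earlier).
p1 = b"\x0aPAYLOAD_FOR_RX1"   # wanted by receiver 1, overheard by receiver 2
p2 = b"\x0bPAYLOAD_FOR_RX2"   # wanted by receiver 2, overheard by receiver 1
coded = bytes(a ^ b for a, b in zip(p1, p2))   # one broadcast instead of two

rx1 = bytes(a ^ b for a, b in zip(coded, p2))  # receiver 1 cancels p2
rx2 = bytes(a ^ b for a, b in zip(coded, p1))  # receiver 2 cancels p1
print(rx1 == p1 and rx2 == p2)  # True
```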
Prof. L. Tassiulas
no
1:18:07

Dynamics of Some Cholera Models
http://www.hamilton.ie/seminars/videos/60p_vandendriessche_lo.mp4
Thu, 22 Nov 2012 00:01:00 +0000
Speaker:
Prof. P. van den Driessche
Abstract:
The World Health Organization estimates that there are 3 to 5 million cholera cases per year, with 100,000 deaths spread over 40 to 50 countries; for example, there has been a recent cholera outbreak in Haiti. Cholera is a bacterial disease caused by Vibrio cholerae, which can be transmitted to humans directly by person-to-person contact or indirectly via the environment (mainly through contaminated water). To better understand the dynamics of cholera, a general ordinary-differential-equation compartmental model is formulated that incorporates these two transmission pathways as well as multiple infection stages and pathogen states. In the model analysis, some matrix theory is used to derive a basic reproduction number, and Lyapunov functions are used to show that this number gives a sharp threshold determining whether cholera dies out or becomes endemic. In the absence of recruitment and death, a final-size equation or inequality is derived, and simulations illustrate how assumptions on cholera transmission affect the final size of the epidemic. Further models that incorporate temporary immunity and hyperinfectivity using distributed delays are formulated, and numerical simulations show that oscillatory solutions may occur for parameter values taken from cholera data in the literature.
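To make the two-pathway structure concrete, here is a minimal SIWR-type sketch (susceptible-infectious-water-recovered) with purely hypothetical parameter values; the reproduction-number formula is the standard next-generation expression for this stripped-down model, not the one derived in the talk for the full multi-stage model:

```python
# Hypothetical rates, chosen only so that R0 > 1
beta_I, beta_W = 0.5, 0.4    # direct and waterborne transmission
gamma = 0.25                 # recovery rate
xi, delta = 0.1, 0.2         # shedding into / decay of the water compartment

# Basic reproduction number: direct pathway plus waterborne loop
R0 = beta_I / gamma + (beta_W * xi) / (gamma * delta)

# Forward-Euler integration of the SIWR equations
S, I, W, R = 0.999, 0.001, 0.0, 0.0
dt = 0.01
for _ in range(int(400 / dt)):
    new_inf = (beta_I * I + beta_W * W) * S
    dS, dI = -new_inf, new_inf - gamma * I
    dW, dR = xi * I - delta * W, gamma * I
    S += dt * dS; I += dt * dI; W += dt * dW; R += dt * dR

print(round(R0, 2))   # 2.8 > 1: the epidemic takes off rather than dying out
```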
Prof. P. van den Driessche
no
1:01:22

Distributed Opportunistic Scheduling: A Control Theoretic Approach
http://www.hamilton.ie/seminars/videos/59a_banchs_lo.mp4
Wed, 10 Oct 2012 00:00:59 +0100
Speaker:
Prof. A. Banchs
Abstract:
Distributed Opportunistic Scheduling (DOS) techniques have recently been proposed to improve the throughput performance of wireless networks. With DOS, each station contends for the channel with a certain access probability. If a contention is successful, the station measures the channel conditions and transmits if the channel quality is above a certain threshold; otherwise, the station does not use the transmission opportunity, allowing all stations to re-contend. A key challenge with DOS is to design a distributed algorithm that optimally adjusts the access probability and the threshold of each station. To address this challenge, we first compute the configuration of these two parameters that jointly optimizes throughput performance in terms of proportional fairness. Then, we propose an adaptive algorithm based on control theory that converges to the desired point of operation. Finally, we conduct a control-theoretic analysis of the algorithm to find a setting of its parameters that provides a good trade-off between stability and speed of convergence. Simulation results validate the design of the proposed mechanism and confirm its advantages over previous proposals.
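A toy Monte-Carlo rendering of the contention part of this scheme (the model and all parameter values are illustrative simplifications; in particular it charges a full slot for a declined transmission opportunity, which the actual DOS analysis treats more carefully):

```python
import numpy as np

rng = np.random.default_rng(7)

def dos_throughput(n_stations, p, threshold, slots=100_000):
    # Each slot: stations contend independently with probability p; on a
    # clean success the winner measures an Exp(1) channel gain and
    # transmits only if the gain exceeds the threshold.
    contenders = rng.binomial(n_stations, p, size=slots)
    success = contenders == 1
    gains = rng.exponential(1.0, size=slots)
    used = success & (gains > threshold)
    return np.log2(1.0 + gains[used]).sum() / slots

n = 10
# Access probability 1/n maximises the clean-success probability
# n * p * (1 - p)**(n - 1), so it should beat a too-aggressive setting:
print(dos_throughput(n, 1 / n, 0.5) > dos_throughput(n, 0.3, 0.5))  # True
```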
Prof. A. Banchs
no
59:32

Large-scale urban vehicular networks: mobility and connectivity
http://www.hamilton.ie/seminars/videos/58m_fiore_lo.mp4
Fri, 05 Oct 2012 00:00:58 +0100
Speaker:
Dr. M. Fiore
Abstract:
Vehicular networks are large-scale communication systems that exploit wireless technologies to interconnect moving cars. They are envisioned to provide drivers with real-time information on potential dangers, road traffic conditions and travel times, thus improving road safety and traffic efficiency. Direct vehicle-to-vehicle communication is also foreseen to enable non-safety applications, such as pervasive urban sensing and fast data dissemination throughout metropolitan regions. The quantity and relevance of potential uses make pervasive inter-vehicular communication one of the highest-impact future applications of wireless technology, which explains the growing interest of both industry and academia in this research field. In this talk, we will address two intertwined topics in vehicular networks: the modeling of vehicular mobility in large-scale urban environments and the topological characterization of the vehicular network built on top of such mobility. Both are fundamental, yet often overlooked, aspects of vehicular networking, defining the strengths and weaknesses of the vehicle-to-vehicle communication system and dictating the rules for the design of dedicated protocols.
Dr. M. Fiore
no
52:44
florian@knorn.org (Hamilton Institute)

Learning Cell Cycle Variability at the Level of Each Phase
http://www.hamilton.ie/seminars/videos/57t_weber_lo.mp4
http://www.hamilton.ie/seminars/videos/57t_weber_lo.mp4
Thu, 27 Sep 2012 00:00:57 +0100
Speaker:
Dr. T. Weber
Abstract:
Intercellular variability in the duration of the cell cycle is a well-documented phenomenon which has been integrated into mathematical models of cell proliferation since the 1970s. Here I present a minimalist stochastic cell cycle model that allows for intercellular variability at the level of each single phase, i.e. G1, S and G2/M. Fitting this model to flow cytometry data from 5-bromo-2'-deoxyuridine (BrdU) pulse-labeling experiments on two different cell lines shows that the mean-field predictions closely mimic the measured average kinetics. However, as indicated by Bayesian inference, scenarios with deterministic or purely stochastic waiting times, especially in the G1 phase, seem to explain the data equally well. To resolve this uncertainty, a novel experimental protocol is proposed that provides sufficient information about cell kinetics to fully determine both the intercellular average and variance of the duration of each of the phases. Finally, I present a case in which this model is extended in order to estimate cell cycle parameters in germinal centers. The latter play a central role in the generation of highly effective antibodies that protect our body against invading pathogens.
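The kind of model described can be sketched as a sum of independent random phase durations; the choice of Gamma distributions and all parameter values below are illustrative assumptions of ours, not the speaker's fitted model.

```python
import random

random.seed(1)  # reproducible draws

# (shape, scale) per phase -- illustrative values, not fitted parameters;
# shape > 1 makes each phase duration peaked rather than memoryless
phases = {"G1": (4, 2.0), "S": (8, 1.0), "G2M": (3, 1.5)}

def cycle_length():
    """Total cycle time: an independent Gamma draw for each phase."""
    return sum(random.gammavariate(shape, scale)
               for shape, scale in phases.values())

samples = [cycle_length() for _ in range(10_000)]
mean = sum(samples) / len(samples)

# expected mean: 4*2.0 + 8*1.0 + 3*1.5 = 20.5 (time units arbitrary)
assert abs(mean - 20.5) < 0.5
```

Because variability is attached to each phase separately, the same total-cycle mean can arise from very different per-phase variances, which is exactly the identifiability issue the proposed experimental protocol is meant to resolve.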
Dr. T. Weber
no
0:43:06
florian@knorn.org (Hamilton Institute)

EPT functions: Nonnegativity analysis, Lévy processes and Financial applications
http://www.hamilton.ie/seminars/videos/56b_hanzon_lo.mp4
http://www.hamilton.ie/seminars/videos/56b_hanzon_lo.mp4
Mon, 17 Sep 2012 00:00:56 +0100
Speaker:
Prof. B. Hanzon
Abstract:
Exponential Polynomial Trigonometric (EPT) functions are considered as probability density functions. A specific matrix-vector representation is proposed for doing calculations with these functions. We investigate when these functions are nonnegative and under which conditions the density functions are infinitely divisible, in which case there is an associated Lévy process. Applications to option-price computations in finance will be presented.
For background information on this topic, see the website www.2ept.com.
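A minimal sketch of what such a matrix-vector representation might look like, assuming the common form f(x) = c · exp(Ax) · b for x ≥ 0; the helper functions and the Erlang-2 example are our illustration, not code from the talk.

```python
import math

def mat_mul(X, Y):
    """Plain matrix product of two nested-list matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_exp(A, terms=40):
    """Truncated Taylor-series matrix exponential (adequate for small ||A||)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, A)]  # A^k / k!
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

def ept(c, A, b, x):
    """Evaluate f(x) = c . exp(A x) . b."""
    E = mat_exp([[a * x for a in row] for row in A])
    return sum(c[i] * E[i][j] * b[j]
               for i in range(len(c)) for j in range(len(b)))

# the Erlang-2 density f(x) = x * exp(-x) in (c, A, b) form
c = [1.0, 0.0]
A = [[-1.0, 1.0], [0.0, -1.0]]
b = [0.0, 1.0]

assert abs(ept(c, A, b, 2.0) - 2.0 * math.exp(-2.0)) < 1e-9
```

The appeal of the representation is that integrals, convolutions and moments of f reduce to linear-algebra operations on (c, A, b).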
Prof. B. Hanzon
no
0:59:22
florian@knorn.org (Hamilton Institute)

Playing with Standards: the IEEE 802.11 case
http://www.hamilton.ie/seminars/videos/55f_gringoli_lo.mp4
http://www.hamilton.ie/seminars/videos/55f_gringoli_lo.mp4
Wed, 12 Sep 2012 00:00:55 +0100
Speaker:
Dr. F. Gringoli
Abstract:
Experimenting in the field is a key activity for the evolution of the modern Internet: this is especially true for radio access protocols like IEEE 802.11, which are usually affected by unpredictable issues due to noise, competing stations and interference. Here we introduce OpenFWWF, an open-source firmware that implements a fully compliant 802.11 MAC on off-the-shelf WiFi boards: we show how it can be used in conjunction with the Linux kernel to play with the wireless stack. To this end, we further demonstrate how we can easily customize the basic DCF access firmware, either to explore performance-boosting variations or to measure physical properties of the wireless channel.
Dr. F. Gringoli
no
1:02:47
florian@knorn.org (Hamilton Institute)

In Search of Optimality: Network Coding for Wireless Networks
http://www.hamilton.ie/seminars/videos/54m_chaudry_lo.mp4
http://www.hamilton.ie/seminars/videos/54m_chaudry_lo.mp4
Wed, 29 Aug 2012 00:00:54 +0100
Speaker:
Dr. M. A. Chaudry
Abstract:
Network coding has gained significant interest from the research community since the first paper by Ahlswede et al. in 2000. Network coding techniques can significantly increase the overall throughput of wireless networks by taking advantage of their broadcast nature. We focus on network coding for wireless networks; specifically, we investigate the Index Coding problem.
In wireless networks, each transmitted packet is broadcast within a certain region and can be overheard by nearby users. When a user needs to transmit packets, it employs Index Coding, which uses the knowledge of what the user's neighbors have heard previously (side information) in order to reduce the number of transmissions. The objective is to satisfy the demands of all the users with the minimum number of transmissions. With Index Coding, each transmitted packet can be a combination of the original packets. The Index Coding problem has been proven to be NP-hard, and NP-hard to approximate.
Noting that the Index Coding problem is not only NP-hard but NP-hard to approximate, we look at it from a novel perspective and define the Complementary Index Coding problem, where the objective is to maximize the number of transmissions that are saved by employing Index Coding compared to a solution that does not involve coding. We prove that the Complementary Index Coding problem can be approximated in several cases of practical importance. We analyze the computational complexity of both the multiple unicast and multiple multicast scenarios of the Complementary Index Coding problem, and provide polynomial-time approximation algorithms.
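A toy example of the XOR saving that Index Coding exploits; the packets, users and hand-picked coded pair below are our illustration, not the speaker's algorithm.

```python
def xor_bytes(a, b):
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

packets = {"p1": b"\x0a", "p2": b"\x0b", "p3": b"\x0c"}

# user -> (wanted packet, side information overheard earlier)
users = {
    "u1": ("p1", {"p2"}),
    "u2": ("p2", {"p1"}),
    "u3": ("p3", set()),
}

# Broadcasting p1 XOR p2 serves u1 and u2 at once, since each can cancel
# the packet it already holds; p3 must still go uncoded for u3.
# Total: 2 transmissions instead of 3 -- one transmission "saved",
# which is exactly the quantity the Complementary problem maximizes.
coded = xor_bytes(packets["p1"], packets["p2"])

assert xor_bytes(coded, packets["p2"]) == packets["p1"]  # u1 decodes p1
assert xor_bytes(coded, packets["p1"]) == packets["p2"]  # u2 decodes p2
```

Finding which packets to pair up in general is where the hardness lies; the hand-picked pairing here only works because the side information is symmetric.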
Dr. M. A. Chaudry
no
59:52
florian@knorn.org (Hamilton Institute)

On Continuous Counting and Learning in a Distributed System
http://www.hamilton.ie/seminars/videos/53b_radunovic_lo.mp4
http://www.hamilton.ie/seminars/videos/53b_radunovic_lo.mp4
Fri, 03 Aug 2012 00:00:53 +0100
Speaker:
Dr. B. Radunović
Abstract:
Consider a distributed system that consists of a coordinator node connected to multiple sites. Items from a data stream arrive at the system one by one and are arbitrarily distributed to different sites. The goal of the system is to continuously track a function of the items received so far within a prescribed relative accuracy and at the lowest possible communication cost. This class of problems is called continual distributed stream monitoring.
In this talk we will focus on two problems from this class. We will first discuss the count tracking problem (counter), which is an important building block for other, more complex algorithms. The goal of the counter is to keep track of the sum of all the items from the stream at all times. We show that for a class of input loads a randomized algorithm tracks the count accurately with high probability, with expected communication cost that is sublinear in both the data size and the number of sites. We also establish matching lower bounds. We then illustrate how our non-monotonic counter can be applied to solve more complex problems, such as tracking the second frequency moment and the Bayesian linear regression of the input stream.
We will next discuss the online non-stochastic experts problem in the continual distributed setting. Here, at each time step, one of the sites has to pick one expert from the set of experts, and then the same site receives information about the payoffs of all experts for that round. The goal of the distributed system is to minimize regret with respect to the optimal choice in hindsight, while simultaneously keeping communication to a minimum. This problem is well understood in the centralized setting, but the communication tradeoff in the distributed setting is unknown. The two extreme solutions to this problem are to communicate with everyone after each payoff, and not to communicate at all. We will discuss how to achieve a tradeoff between these two approaches. We will present an algorithm that achieves a nontrivial tradeoff and show the difficulties of further improving its performance.
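For intuition, a deterministic variant of count tracking is easy to sketch: each site reports to the coordinator only when its local count has grown by a factor (1 + ε), bounding the relative error while keeping messages logarithmic per site. This is a simplified stand-in for the monotonic case, not the randomized algorithm from the talk; all names and parameters are ours.

```python
class Coordinator:
    """Keeps one running estimate per site and counts messages."""
    def __init__(self):
        self.estimates = {}
        self.messages = 0

    def update(self, site_id, count):
        self.estimates[site_id] = count
        self.messages += 1

    def estimate(self):
        return sum(self.estimates.values())

class Site:
    def __init__(self, site_id, coordinator, eps=0.1):
        self.site_id = site_id
        self.count = 0
        self.last_reported = 0
        self.eps = eps
        self.coordinator = coordinator

    def receive_item(self):
        self.count += 1
        # report only when the local count has grown by a (1 + eps) factor
        if self.count >= (1 + self.eps) * max(self.last_reported, 1):
            self.coordinator.update(self.site_id, self.count)
            self.last_reported = self.count

coord = Coordinator()
sites = [Site(i, coord) for i in range(4)]
for i in range(10_000):
    sites[i % 4].receive_item()  # items spread arbitrarily across sites

# estimate is within ~eps relative error of the true total of 10,000,
# after far fewer than 10,000 messages
assert abs(coord.estimate() - 10_000) <= 0.1 * 10_000
assert coord.messages < 10_000
```

The scheme above only handles increments; supporting decrements (the non-monotonic counter of the talk) is precisely what requires randomization to stay communication-efficient.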
Dr. B. Radunović
no
1:05:53
florian@knorn.org (Hamilton Institute)

Multichannel MAC Protocols for Wireless Sensor Networks
http://www.hamilton.ie/seminars/videos/52c_cano_lo.mp4
http://www.hamilton.ie/seminars/videos/52c_cano_lo.mp4
Tue, 31 Jul 2012 00:00:52 +0100
Speaker:
Dr. C. Cano
Abstract:
Wireless Sensor Networks (WSNs) are networks formed by highly constrained devices that communicate measured environmental data using low-power wireless transmissions. The increase of spectrum utilization in unlicensed bands, along with the reduced power used by these nodes, is expected to cause severe interference problems in WSNs. Therefore, the design of new dynamic spectrum access techniques specifically tailored to these networks plays an important role in their future development. In this talk, the main challenges of dynamic spectrum access in WSNs will be described and a first approach to coordinating sensor nodes will be presented.
Dr. C. Cano
no
40:09
florian@knorn.org (Hamilton Institute)

Networking Infrastructure and Data Management for Cyber-Physical Systems
http://www.hamilton.ie/seminars/videos/51s_han_lo.mp4
http://www.hamilton.ie/seminars/videos/51s_han_lo.mp4
Tue, 10 Jul 2012 00:00:51 +0100
Speaker:
S. Han
Abstract:
A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. A large-scale CPS usually consists of several subsystems which are formed by networked sensors and actuators and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. How to organize the sensors and actuators inside each subsystem, and how to interconnect these physically separated subsystems to achieve secure, reliable and real-time communication, is a major challenge.
In this talk, I will first present a TDMA-based, low-power and secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems which require flexible topology control, secure and reliable communication, and adjustable real-time service support. I will describe network management techniques for ensuring reliable routing and real-time services inside the subsystems, and data management techniques for maintaining the quality of the data sampled from the physical world. To evaluate the proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. I will also present a lightweight and scalable solution for interconnecting heterogeneous CPS subsystems through a slim IP adaptation layer. This approach makes the underlying connectivity technologies transparent to application developers, thus enabling rapid application development and efficient migration among different CPS platforms.
S. Han
no
1:08:32
florian@knorn.org (Hamilton Institute)

Cracking the Cutoff Window
http://www.hamilton.ie/seminars/videos/50c_lancia_lo.mp4
http://www.hamilton.ie/seminars/videos/50c_lancia_lo.mp4
Mon, 11 Jun 2012 00:00:50 +0100
Speaker:
C. Lancia
Abstract:
The cutoff phenomenon is the abrupt convergence to stationarity of a Markov chain. It is characterized by a narrow window, centered around a cutoff time, in which the distance from stationarity suddenly drops from 1 to 0.
All the examples in which cutoff has been detected clearly indicate that a drift towards the appropriate quantiles of the stationary measure could be held responsible for this phenomenon. In the case of birth and death chains this mechanism is fairly well understood.
I will present a possible generalization of this picture to more general systems and show that there are two sources of randomness contributing to the size of the cutoff window. One is related to the drift towards the relevant quantiles of $\pi$ and the other to the thermalization in that region of the state space.
For one-dimensional systems a sufficiently strong drift ensures that the thermalization is under control, but for higher-dimensional models the thermalization contribution can widen the cutoff window and even destroy the phenomenon completely.
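The phenomenon is easy to observe numerically. Below is a sketch (our example, not from the talk) using the lazy Ehrenfest urn, a classic birth-and-death chain with a drift towards the binomial quantiles: the total-variation distance from stationarity stays near 1 well before roughly (n/2)·log n steps and is near 0 well after.

```python
import math

n = 100  # number of balls; for this lazy chain, cutoff occurs around (n/2)*log(n) ~ 230 steps

# stationary law is Binomial(n, 1/2)
pi = [math.comb(n, k) / 2 ** n for k in range(n + 1)]

def step(dist):
    """One step of the lazy Ehrenfest chain on states 0..n."""
    new = [0.0] * (n + 1)
    for k, p in enumerate(dist):
        new[k] += 0.5 * p                        # lazy: hold with prob 1/2
        if k > 0:
            new[k - 1] += 0.5 * p * k / n        # move a ball out
        if k < n:
            new[k + 1] += 0.5 * p * (n - k) / n  # move a ball in
    return new

def tv(dist):
    """Total-variation distance from the stationary distribution."""
    return 0.5 * sum(abs(a - b) for a, b in zip(dist, pi))

dist = [0.0] * (n + 1)
dist[0] = 1.0  # start from the extreme state: all balls in one urn

tvs = {}
for t in range(1, 601):
    dist = step(dist)
    if t in (20, 230, 600):
        tvs[t] = tv(dist)

assert tvs[20] > 0.9    # far from stationarity before the window
assert tvs[600] < 0.05  # essentially mixed well after it
```

Sampling tv(dist) on a finer time grid makes the abrupt drop around t ≈ 230 visible, with a window of order n rather than n log n.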
C. Lancia
no
39:38
florian@knorn.org (Hamilton Institute)

Reaching Consensus about Gossip
http://www.hamilton.ie/seminars/videos/49p_thiran_lo.mp4
http://www.hamilton.ie/seminars/videos/49p_thiran_lo.mp4
Mon, 28 May 2012 00:00:49 +0100
Speaker:
Prof. P. Thiran
Abstract:
An increasing number of applications require networks to perform decentralized computations over distributed data. A representative example of these "in-network processing" tasks is the distributed computation of the average of the values present at the nodes of a network, by means of gossip algorithms. These have recently received significant attention across different communities (networking, algorithms, signal processing, control) because they constitute simple and robust methods for distributed information processing over networks.
The first part of the talk surveys some recent results on real-valued (analog) gossip algorithms. For many topologies that are realistic for wireless sensor networks, the classical nearest-neighbor gossip algorithms are slow, but a variation of these algorithms can be proven to be order-optimal (a cost of O(n) messages for a network of n nodes) for some random geometric graphs. A second improvement, inspired by Uniform Gossip, allows the average to be computed along unidirectional paths, instead of requiring it to be routed back and forth along the same path (one-way paths are better suited to highly dynamic networks).
The second part of the talk is devoted to quantized gossip on arbitrary connected networks. By their nature, quantized algorithms cannot produce a real, analog average, but they can (almost surely) reach consensus, in finite time, on the quantized interval that contains the average.
(This is joint work with Florence Benezit, Martin Vetterli, Alex Dimakis, Vincent Blondel and John Tsitsiklis.)
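The basic nearest-neighbor mechanism is easy to sketch (a minimal toy with an assumed ring topology and step count, not one of the algorithms analysed in the talk): each step averages the values across one randomly chosen edge, the sum is conserved, and all values drift to the global mean.

```python
import random

def pairwise_gossip(values, edges, steps, seed=0):
    """Repeatedly pick a random edge and replace both endpoint values
    by their average; the total sum is invariant at every step."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

n = 20
ring = [(i, (i + 1) % n) for i in range(n)]   # a ring of 20 sensor nodes
x = pairwise_gossip([float(v) for v in range(n)], ring, steps=20000)
mean = sum(range(n)) / n                      # the target global average
```

On the ring this is exactly the "slow" regime the abstract mentions: many steps are needed because information spreads only one hop at a time.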
Prof. P. Thiran
no
1:12:03
florian@knorn.org (Hamilton Institute)

The Role of Kemeny's Constant in Properties of Markov Chains
http://www.hamilton.ie/seminars/videos/48j_hunter_lo.mp4
http://www.hamilton.ie/seminars/videos/48j_hunter_lo.mp4
Wed, 09 May 2012 00:00:48 +0100
Speaker:
Prof. J. J. Hunter
Abstract:
In a finite m-state irreducible Markov chain with stationary probabilities {\pi_i} and mean first passage times m_{ij} (mean recurrence time when i=j), it was first shown by Kemeny and Snell that \sum_{j=1}^{m}\pi_j m_{ij} is a constant, K, not depending on i. This constant has since become known as Kemeny's constant. We consider a variety of techniques for finding expressions for K, derive some bounds for K, and explore various applications and interpretations of these results. Interpretations include the expected number of links that a surfer on the World Wide Web, located on a random page, needs to follow to reach a desired location, as well as the expected time to mixing in a Markov chain. Various applications are considered, including some perturbation results, mixing on directed graphs, and the relation to the Kirchhoff index of regular graphs.
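The constancy claim is easy to check numerically. The sketch below (my own, using the standard fundamental-matrix formulas rather than anything specific from the talk) computes the mean first passage times via Z = (I - P + 1 pi^T)^{-1} and verifies that \sum_j \pi_j m_{ij} is the same for every starting state i:

```python
import numpy as np

def stationary(P):
    """Stationary distribution: solve pi P = pi with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def kemeny_row_sums(P):
    """sum_j pi_j * m_ij for every starting state i, with m_ii taken
    as the mean recurrence time 1/pi_i (the convention in the abstract)."""
    n = P.shape[0]
    pi = stationary(P)
    # fundamental matrix Z = (I - P + 1 pi^T)^{-1}
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = 1.0 / pi[i] if i == j else (Z[j, j] - Z[i, j]) / pi[j]
    return M @ pi

rng = np.random.default_rng(7)
P = rng.random((6, 6))
P /= P.sum(axis=1, keepdims=True)   # a random positive stochastic matrix
K = kemeny_row_sums(P)              # one value per starting state
```

All entries of `K` agree to machine precision, as Kemeny and Snell's result predicts.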
Prof. J. J. Hunter
no
52:12
florian@knorn.org (Hamilton Institute)

Experiences in Industrial Mathematics in Ireland
http://www.hamilton.ie/seminars/videos/47s_obrien_lo.mp4
http://www.hamilton.ie/seminars/videos/47s_obrien_lo.mp4
Mon, 23 Apr 2012 00:00:47 +0100
Speaker:
Prof. S. O'Brien
Abstract:
In the context of the MACSI industrial mathematics group, we look at the types of problems that have arisen from industrial collaboration and examine a couple of these in detail.
In particular, we look at a mathematical model for etching glass with acids, which arose from a Study Group with Industry problem presented by Waterford Crystal.
Prof. S. O'Brien
no
56:25
florian@knorn.org (Hamilton Institute)

Geographically weighted regression: modelling spatial heterogeneity
http://www.hamilton.ie/seminars/videos/46m_charlton_lo.mp4
http://www.hamilton.ie/seminars/videos/46m_charlton_lo.mp4
Wed, 21 Mar 2012 00:00:46 +0000
Speaker:
Martin Charlton
Abstract:
Geographically Weighted Regression (GWR) is a technique for exploratory spatial data analysis. In "normal" regression with data for spatial objects we assume that the relationship we are modelling is uniform across the study area; that is, the estimated regression parameters are "whole-map" statistics. In many situations this is not necessarily the case, as mapping the residuals (the differences between the observed and predicted data) may reveal. Many different solutions have been proposed for dealing with spatial variation in these relationships; GWR provides a means of modelling such relationships.
This seminar outlines the characteristics of spatial data and the challenges their use poses for analysis, presents the ideas underpinning geographically weighted regression, and details the process of estimating and interpreting the outputs of GWR models. We finish with a brief survey of current issues in GWR and potential future developments.
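The estimation step can be sketched in a few lines (a simplified illustration with a Gaussian kernel and a fixed bandwidth, not the speakers' implementation): at each location i a weighted least-squares fit is run, with weights that decay with distance from i, so each location gets its own local coefficients.

```python
import numpy as np

def gwr_coefficients(X, y, coords, bandwidth):
    """One local weighted least-squares fit per observation:
    beta_i = (X' W_i X)^{-1} X' W_i y, with Gaussian kernel weights
    w_ij = exp(-0.5 * (d_ij / bandwidth)^2) based on distance to i."""
    n = X.shape[0]
    Xd = np.column_stack([np.ones(n), X])          # add an intercept
    betas = np.empty((n, Xd.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)
        XtW = Xd.T * w                             # X' W_i
        betas[i] = np.linalg.solve(XtW @ Xd, XtW @ y)
    return betas

# sanity check: a spatially uniform relationship y = 1 + 2x, so every
# local fit should recover the same "whole-map" coefficients (1, 2)
rng = np.random.default_rng(0)
coords = rng.random((50, 2))
x = rng.random(50)
y = 1.0 + 2.0 * x
betas = gwr_coefficients(x.reshape(-1, 1), y, coords, bandwidth=0.3)
```

When the true relationship varies over space, the rows of `betas` differ from place to place, which is exactly the heterogeneity GWR is designed to reveal.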
Martin Charlton
no
1:05:02
florian@knorn.org (Hamilton Institute)

Cascade Dynamics on Complex Networks
http://www.hamilton.ie/seminars/videos/45a_hackett_lo.mp4
http://www.hamilton.ie/seminars/videos/45a_hackett_lo.mp4
Wed, 14 Mar 2012 00:00:45 +0000
Speaker:
Dr. A. Hackett
Abstract:
A cascade or avalanche is observed when interactions between the components of a system allow an initially localized effect to propagate globally. For example, the malfunction of technological systems like email networks or electrical power grids is often attributable to a cascade of failures triggered by some isolated event. Similarly, the transmission of infectious diseases and the adoption of innovations or cultural fads may induce cascades among people in society. It has been extensively demonstrated that such dynamics depend sensitively on the patterns of interaction laid out in the underlying network of the system. One of the primary goals of the study of complex networks is to provide a sound theoretical basis for this dependence.
In this seminar we discuss some recent progress in modelling the interaction between network structure and dynamics. Focusing on the phenomenon of high clustering, we present two recently proposed classes of random graphs that achieve nonzero clustering coefficients. We provide an analytically tractable framework for modelling cascades in both of these classes. This framework is then used to calculate the mean cascade size and the cascade threshold for a broad class of binary-state dynamics.
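A standard member of the binary-state family mentioned above is the Watts threshold model, sketched here as a toy (my own illustration on a tiny hand-built graph, not the analytical framework from the talk): a node activates once the active fraction of its neighbours reaches a threshold, and whether a seed triggers a global cascade depends sharply on that threshold.

```python
def threshold_cascade(adj, threshold, seeds):
    """Watts-style binary-state cascade: an inactive node activates once
    the active fraction of its neighbours reaches `threshold`."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, nbrs in enumerate(adj):
            if v not in active and nbrs:
                frac = sum(u in active for u in nbrs) / len(nbrs)
                if frac >= threshold:
                    active.add(v)
                    changed = True
    return active

# a path graph 0-1-2-3-4 seeded at node 0
path = [[1], [0, 2], [1, 3], [2, 4], [3]]
full = threshold_cascade(path, 0.5, seeds=[0])    # spreads to everyone
stuck = threshold_cascade(path, 0.6, seeds=[0])   # dies at the seed
```

Raising the threshold from 0.5 to 0.6 is enough to stop the cascade at the seed, a toy version of the cascade threshold the abstract refers to.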
Dr. A. Hackett
no
1:10:26
florian@knorn.org (Hamilton Institute)

Exploit prediction to handle mobility in wireless ad hoc networks
http://www.hamilton.ie/seminars/videos/44x_li_lo.mp4
http://www.hamilton.ie/seminars/videos/44x_li_lo.mp4
Thu, 01 Mar 2012 00:00:44 +0000
Speaker:
Dr. X. Li
Abstract:
Node mobility is often a hindrance to the networking process in wireless ad hoc networks. In this talk, we will introduce two of our recent works that address this problem through a prediction approach.
The first proposes an Auto-Regressive Hello protocol (ARH) for mobile ad hoc networks. A hello protocol is a basic tool for neighborhood discovery: it requires nodes to announce their existence/aliveness by periodic 'hello' messages. ARH evolves along with the network dynamics by predicting node mobility, and seamlessly tunes its 'hello' frequency using local knowledge only.
The second proposes a distributed Prediction-based Secure and Reliable routing framework (PSR) for wireless body area networks. In this protocol, each node predicts the quality of every incident link, as well as any change in its neighbor set, based on an autoregressive model. According to the prediction result, it selects the routing next hop and decides whether to enable or disable source authentication.
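The prediction machinery common to both works can be illustrated with the simplest case, an AR(1) model fitted by least squares (a toy stand-in, not the estimators actually used in ARH or PSR): a node learns the coefficient a in x[t] ~ a*x[t-1] from its own history and extrapolates one step ahead.

```python
def ar1_fit(series):
    """Least-squares estimate of a in the AR(1) model x[t] ~ a * x[t-1]."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(v * v for v in series[:-1])
    return num / den

def ar1_predict(series):
    """One-step-ahead prediction of the next value."""
    return ar1_fit(series) * series[-1]

# a hypothetical link-quality trace decaying geometrically: x[t] = 0.9^t
trace = [0.9 ** t for t in range(20)]
a = ar1_fit(trace)        # recovers 0.9 exactly on this noise-free data
nxt = ar1_predict(trace)  # predicted next sample
```

In the protocols the prediction would drive a decision, e.g. stretching the hello interval while the predicted mobility is low, or switching the next hop before a link is predicted to fail.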
Dr. X. Li
no
48:49
florian@knorn.org (Hamilton Institute)

Juggler's Exclusion Process
http://www.hamilton.ie/seminars/videos/43l_leskela_lo.mp4
http://www.hamilton.ie/seminars/videos/43l_leskela_lo.mp4
Wed, 01 Feb 2012 00:00:43 +0000
Speaker:
Prof. L. Leskelä
Abstract:
The juggler's exclusion process describes a system of particles on the positive integers where particles drift down towards zero at unit speed. After a particle hits zero, it jumps into a randomly chosen unoccupied site. I will model the system as a set-valued Markov process and show that the process is ergodic if the family of jump height distributions is uniformly integrable. In a special case where the particles jump in an entropy-maximizing fashion, the process reaches its equilibrium in finite non-random time, and the equilibrium distribution can be represented as a Gibbs measure conforming to a linear gravitational potential. Time permitting, I will also discuss a recent result which sharply characterizes uniform integrability using the theory of stochastic orders, and allows the dominating function in Lebesgue's dominated convergence theorem to be interpreted in a natural probabilistic way.
This talk is based on joint work with Harri Varpanen (Aalto University, Finland) and Matti Vihola (University of Jyväskylä, Finland).
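A simulation sketch makes the dynamics concrete (my own reading of the process, with an assumed uniform jump law standing in for the jump height distribution): heights drop by one each tick, and a particle reaching zero resamples its landing site until it finds one that is unoccupied, which is the exclusion constraint.

```python
import random

def juggler_step(heights, jump, rng):
    """One unit-time step: every particle drops by 1; a particle that
    reaches 0 jumps to a freshly sampled unoccupied positive site."""
    moved = [h - 1 for h in heights]
    occupied = {h for h in moved if h > 0}
    out = []
    for h in moved:
        if h == 0:
            h = jump(rng)
            while h in occupied:        # exclusion: resample if taken
                h = jump(rng)
            occupied.add(h)
        out.append(h)
    return out

jump_law = lambda rng: rng.randint(1, 50)  # assumed: uniform on {1..50}
rng = random.Random(3)
config = [1, 2, 3, 4, 5]                   # five juggling balls
for _ in range(100):
    config = juggler_step(config, jump_law, rng)
```

Two invariants hold along the whole trajectory: the number of particles is conserved, and no site is ever doubly occupied.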
Prof. L. Leskelä
no
52:17
florian@knorn.org (Hamilton Institute)

Exploratory analysis of human mobility and activities from georeferenced communication data streams
http://www.hamilton.ie/seminars/videos/42a_pozdnoukhov_lo.mp4
http://www.hamilton.ie/seminars/videos/42a_pozdnoukhov_lo.mp4
Thu, 19 Jan 2012 00:00:42 +0000
Speaker:
Dr. A. Pozdnoukhov
Abstract:
Communication technologies, with their very high penetration into society, can serve as a particularly rich source of information for exploring and modelling the evolution of complex social systems.
This talk presents a framework of methods for the exploratory analysis, modelling and visualization of data streams available from Twitter, instant messenger services and mobile phone communication logs. We apply probabilistic topic models to uncover the temporal evolution and spatial variability of the population's response to various stimuli such as large-scale sporting, political or cultural events. We demonstrate how untypical activity levels can be identified by fitting non-homogeneous Markov-modulated Poisson processes and exploring the spatial variability of the component corresponding to unusual bursts/lulls of human activity.
Finally, we present initial ideas on the combined use of the available data sources and models within a joint large-scale geocomputation framework to uncover the complex interplay of mobility and communication patterns.
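As a much-simplified stand-in for the Markov-modulated Poisson approach (my own toy, assuming a single known baseline rate rather than a fitted MMPP), untypical activity levels can be flagged by the Poisson tail probability of each observed count:

```python
from math import exp, factorial

def poisson_sf(k, lam):
    """Survival function P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(exp(-lam) * lam ** i / factorial(i) for i in range(k))

def flag_bursts(counts, baseline, alpha=1e-3):
    """Indices whose count is implausibly large under the baseline rate."""
    return [t for t, c in enumerate(counts) if poisson_sf(c, baseline) < alpha]

# hypothetical hourly message counts with one burst (say, during a
# match) at hour 3, against a baseline rate of 5 messages per hour
counts = [5, 6, 4, 30, 5, 7]
bursts = flag_bursts(counts, baseline=5.0)
```

The MMPP approach in the talk goes further by letting the rate itself switch between hidden regimes, so that diurnal patterns are not mistaken for bursts.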
Dr. A. Pozdnoukhov
no
46:47
florian@knorn.org (Hamilton Institute)

Diagonal Stability and Completely Positive Matrices
http://www.hamilton.ie/seminars/videos/41a_berman_lo.mp4
http://www.hamilton.ie/seminars/videos/41a_berman_lo.mp4
Mon, 17 Oct 2011 00:00:41 +0100
Speaker:
Prof. A. Berman
Abstract:
In this talk, a general notion of common diagonal Lyapunov matrix is formulated for a collection of n×n matrices A_1,...,A_s and polyhedral cones k_1,...,k_s in R^n. Necessary and sufficient conditions are derived for the existence of a common diagonal Lyapunov matrix in this setting.
This talk is based on joint work with Christopher King & Robert Shorten.
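Verifying a candidate common diagonal Lyapunov matrix is a simple eigenvalue check (a generic sketch of the standard unconstrained condition, not the cone-constrained construction from the talk): a diagonal D works for the whole collection when D A + A^T D is negative definite for every A.

```python
import numpy as np

def is_common_diagonal_lyapunov(D, matrices, tol=1e-10):
    """Check that the diagonal matrix D satisfies D A + A^T D < 0
    (negative definite) for every A in the collection."""
    for A in matrices:
        Q = D @ A + A.T @ D          # symmetric by construction
        if np.max(np.linalg.eigvalsh(Q)) >= -tol:
            return False
    return True

# two Hurwitz matrices sharing D = I as a common diagonal Lyapunov matrix
A1 = np.array([[-2.0, 1.0], [1.0, -2.0]])
A2 = np.array([[-3.0, 0.5], [0.5, -1.0]])
ok = is_common_diagonal_lyapunov(np.eye(2), [A1, A2])
```

Finding such a D in general is a linear matrix inequality (semidefinite) feasibility problem; the check above only verifies a given candidate.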
Prof. A. Berman
no
39:33
florian@knorn.org (Hamilton Institute)

Load balancing for Markov chains
http://www.hamilton.ie/seminars/videos/40s_kirkland_lo.mp4
http://www.hamilton.ie/seminars/videos/40s_kirkland_lo.mp4
Mon, 17 Oct 2011 00:00:40 +0100
Speaker:
Prof. S. Kirkland
Abstract:
A square matrix T is called stochastic if its entries are nonnegative and its row sums are all equal to one. Stochastic matrices are the centrepiece of the theory of discrete-time, time-homogeneous Markov chains on a finite state space. If some power of the stochastic matrix T has all positive entries, then there is a unique left eigenvector for T, known as the stationary distribution, to which the iterates of the Markov chain converge, regardless of the initial distribution of the chain. Thus, in this setting, the stationary distribution can be thought of as giving the probability that the chain is in a particular state over the long run.
In many applications, the stochastic matrix under consideration is equipped with an underlying combinatorial structure, which can be recorded in a directed graph. Given a stochastic matrix T, how are the entries in the stationary distribution influenced by the structure of the directed graph associated with T? In this talk we investigate a question of that type by finding the minimum value of the maximum entry in the stationary distribution for T, as T ranges over the set of stochastic matrices with a given directed graph. The solution involves techniques from matrix theory, graph theory, and nonlinear programming.
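The convergence described above can be sketched in a few lines; the 3-state chain below is this editor's illustrative choice, not an example from the talk. Its maximum stationary entry (0.5) is the kind of quantity the talk studies.

```python
# Sketch of stationary-distribution convergence (illustrative 3-state chain
# chosen by this editor, not from the talk).  Iterating x <- x T converges
# to the stationary distribution when some power of T is entrywise positive.

def stationary(t, iters=500):
    n = len(t)
    x = [1.0 / n] * n                     # arbitrary initial distribution
    for _ in range(iters):
        x = [sum(x[i] * t[i][j] for i in range(n)) for j in range(n)]
    return x

T = [[0.50, 0.50, 0.00],                  # row-stochastic: each row sums to one
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]
pi = stationary(T)                        # -> approximately [0.25, 0.5, 0.25]
```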
Prof. S. Kirkland
no
39:18

The Symmetric Nonnegative Inverse Eigenvalue Problem
http://www.hamilton.ie/seminars/videos/39h_smigoc_lo.mp4
http://www.hamilton.ie/seminars/videos/39h_smigoc_lo.mp4
Mon, 17 Oct 2011 00:00:39 +0100
Speaker:
Dr. H. Šmigoc
Abstract:
The question of which lists of complex numbers are the spectra of nonnegative matrices is known as the nonnegative inverse eigenvalue problem, and the same question posed for symmetric nonnegative matrices is called the symmetric nonnegative inverse eigenvalue problem. In the talk we will present an overview of some recent results on the symmetric nonnegative inverse eigenvalue problem.
Joint work with T. J. Laffey.
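As a concrete baseline (a classical rank-one construction, not one of the recent results surveyed in the talk; the numbers are this editor's illustration), a list of the form (a, b, ..., b) with a ≥ 0 ≥ b and nonnegative trace is always symmetrically realizable:

```python
# Classical baseline construction (not from the talk): for a spectrum
# (a, b, ..., b) with a >= 0 >= b and a + (n-1)b >= 0, the matrix
# A = b*I + ((a - b)/n)*J  (J = all-ones matrix) is symmetric nonnegative
# with eigenvalues a (once) and b (n-1 times).

def rank_one_realization(a, b, n):
    c = (a - b) / n
    return [[b + c if i == j else c for j in range(n)] for i in range(n)]

A = rank_one_realization(5.0, -1.0, 5)
# off-diagonal entries (a-b)/n = 1.2, diagonal b + (a-b)/n = 0.2 -- all >= 0;
# each row sums to a = 5.0 (Perron eigenvalue, eigenvector all-ones)
```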
Dr. H. Šmigoc
no
31:54

On the Block Numerical Range of Operators in Banach Spaces
http://www.hamilton.ie/seminars/videos/38k_foerster_lo.mp4
http://www.hamilton.ie/seminars/videos/38k_foerster_lo.mp4
Mon, 17 Oct 2011 00:00:38 +0100
Speaker:
Prof. K.H. Förster
Abstract:
In this talk the following topics will be discussed:
- The Numerical Range of Operators in Banach Spaces.
- The Block Numerical Range of Operators.
- The Block Numerical Range of Operator Functions.
- The Block Numerical Range of m-monic Perron-Frobenius Matrix Polynomials.
Prof. K.H. Förster
no
37:52

Essentially Negative News About Positive Systems
http://www.hamilton.ie/seminars/videos/37p_colaneri_lo.mp4
http://www.hamilton.ie/seminars/videos/37p_colaneri_lo.mp4
Mon, 17 Oct 2011 00:00:37 +0100
Speaker:
Prof. P. Colaneri
Abstract:
In this paper the discretisation of switched and non-switched linear positive systems using Padé approximations is considered. Padé approximations to the matrix exponential are sometimes used by control engineers for discretising continuous-time systems and for control system design. We observe that this method of approximation is not suited to the discretisation of positive dynamic systems, for two key reasons. First, certain types of Lyapunov stability are not, in general, preserved. Secondly, and more seriously, positivity need not be preserved, even when stability is. Finally, we present an alternative approximation to the matrix exponential which preserves positivity, and linear and quadratic stability.
This talk is based on joint work with Steve Kirkland, Annalisa Zappavigna & Robert Shorten.
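The positivity failure is easy to see numerically. The sketch below is this editor's illustration (the matrix is an assumed example, not one from the talk): for a Metzler matrix A, the exact discretisation exp(hA) is entrywise nonnegative, but the (1,1) diagonal Padé approximant need not be.

```python
# Numeric illustration (matrix chosen by this editor, not from the talk):
# for a Metzler matrix A (nonnegative off-diagonal entries), exp(h*A) is
# entrywise nonnegative, but the (1,1) diagonal Pade approximant
# P = (I - (h/2)A)^(-1) (I + (h/2)A) can have negative entries.

def pade11_2x2(a, h):
    m = [[1 - h / 2 * a[0][0], -h / 2 * a[0][1]],
         [-h / 2 * a[1][0], 1 - h / 2 * a[1][1]]]      # M = I - (h/2)A
    n = [[1 + h / 2 * a[0][0], h / 2 * a[0][1]],
         [h / 2 * a[1][0], 1 + h / 2 * a[1][1]]]       # N = I + (h/2)A
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    minv = [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]
    return [[sum(minv[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[-10.0, 1.0], [1.0, -10.0]]    # Metzler and Hurwitz
P = pade11_2x2(A, 1.0)
# P[0][0] < 0: the discretised system is no longer positive
```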
Prof. P. Colaneri
no
46:25

Some relationships between formal power series and nonnegative matrices
http://www.hamilton.ie/seminars/videos/36t_laffey_lo.mp4
http://www.hamilton.ie/seminars/videos/36t_laffey_lo.mp4
Mon, 17 Oct 2011 00:00:36 +0100
Speaker:
Prof. T. Laffey
Abstract:
Let σ = (λ_1,...,λ_n) be a list of complex numbers which we aim to realize constructively as the spectrum of a nonnegative matrix. Most constructions available in the literature rely on building matrices related to companion matrices from the polynomial f(x) = (x − λ_1)···(x − λ_n). Kim, Ormes and Roush (JAMS 2000) showed how certain formal power series related to f(x), which have all coefficients, other than the leading one, negative, can be used in finding constructions over the semiring of polynomials with nonnegative coefficients, while, in joint work, Šmigoc and this author (ELA 17 (2008) 333-342; LAMA 58 (2010) 1053-1059) have used polynomials having all their nonleading coefficients negative to find realizations when σ has not more than two entries with positive real parts. Beginning with the observation that if λ_1,...,λ_n are all positive, then the Taylor expansion of the nth root of F(t) = (1 − λ_1 t)···(1 − λ_n t) about t = 0 has all its nonleading coefficients negative, we present a number of results on the negativity of the coefficients of power series and their applications to nonnegative matrices.
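The opening observation can be checked in exact arithmetic. The sketch below is this editor's illustration (the λ values are assumed): it computes the nth root of F(t) as exp(log(F)/n) on truncated power series over the rationals.

```python
from fractions import Fraction

# Checking the observation for assumed positive roots (editor's
# illustration, not code from the talk): the Taylor coefficients of
# ((1 - l_1 t)...(1 - l_n t))^(1/n) beyond the leading 1 come out negative.

def poly_from_roots(lams, k):
    """Coefficients of F(t) = prod (1 - lam*t), truncated at order k."""
    a = [Fraction(1)] + [Fraction(0)] * k
    for lam in lams:
        for i in range(k, 0, -1):
            a[i] -= lam * a[i - 1]
    return a

def series_log(a, k):
    """log of a power series with a[0] == 1, truncated at order k."""
    l = [Fraction(0)] * (k + 1)
    for m in range(1, k + 1):
        s = sum((a[j] * (m - j) * l[m - j] for j in range(1, m)), Fraction(0))
        l[m] = a[m] - s / m
    return l

def series_exp(h, k):
    """exp of a power series with h[0] == 0, truncated at order k."""
    e = [Fraction(1)] + [Fraction(0)] * k
    for m in range(1, k + 1):
        e[m] = sum(e[j] * (m - j) * h[m - j] for j in range(m)) / m
    return e

lams, k = [1, 2, 3], 8
F = poly_from_roots(lams, k)
g = series_exp([c / len(lams) for c in series_log(F, k)], k)
# g[0] == 1 and every later coefficient of the cube root is negative
```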
Prof. T. Laffey
no
44:29

Maximal exponents of polyhedral cones
http://www.hamilton.ie/seminars/videos/35r_loewy_lo.mp4
http://www.hamilton.ie/seminars/videos/35r_loewy_lo.mp4
Mon, 17 Oct 2011 00:00:35 +0100
Speaker:
Prof. R. Loewy
Abstract:
Let K be a proper (i.e., closed, pointed, full and convex) cone in R^n. We consider A ∈ R^(n×n) which is K-primitive, that is, there exists a positive integer l such that A^l x ∈ int K for every 0 ≠ x ∈ K. The smallest such l is called the exponent of A, denoted by γ(A).
For a polyhedral cone K, the maximum value of γ(A), taken over all K-primitive matrices A, is denoted by γ(K). Our main result is that for any positive integers m, n with 3 ≤ n ≤ m, the maximum value of γ(K), as K runs through all n-dimensional polyhedral cones with m extreme rays, equals
(n − 1)(m − 1) + ½(1 + (−1)^((n−1)m)).
We will consider various uniqueness issues related to the main result as well as its connections to known results.
This talk is based on a joint work with Micha Perles and BitShun Tam.
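The bound above can be made explicit (the evaluation below is this editor's, but the formula is the one stated in the abstract): since (−1)^((n−1)m) is ±1, the maximum equals (n−1)(m−1) + 1 when (n−1)m is even and (n−1)(m−1) when it is odd.

```python
# Evaluating the stated bound (editor's sketch; the formula is from the
# abstract above): gamma_max = (n-1)(m-1) + (1 + (-1)^((n-1)m)) / 2.

def max_exponent(n, m):
    assert 3 <= n <= m, "the result is stated for 3 <= n <= m"
    return (n - 1) * (m - 1) + (1 + (-1) ** ((n - 1) * m)) // 2

# e.g. max_exponent(3, 3) -> 5 (since (n-1)m = 6 is even),
#      max_exponent(4, 5) -> 12 (since (n-1)m = 15 is odd)
```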
Prof. R. Loewy
no
48:32

From nonnegative matrices to nonnegative tensors
http://www.hamilton.ie/seminars/videos/34s_friedland_lo.mp4
http://www.hamilton.ie/seminars/videos/34s_friedland_lo.mp4
Mon, 17 Oct 2011 00:00:34 +0100
Speaker:
Prof. S. Friedland
Abstract:
In this talk we will discuss a number of generalizations of results on nonnegative matrices to nonnegative tensors, such as: irreducibility and weak irreducibility, the Perron-Frobenius theorem, the Collatz-Wielandt characterization, Kingman's inequality, the Karlin-Ost and Friedland theorems, the tropical spectral radius, diagonal scaling, the Friedland-Karlin inequality, and nonnegative multilinear forms.
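As a reference point for the tensor generalizations, the matrix case of the Collatz-Wielandt characterization can be sketched in a few lines (the matrix below is this editor's illustrative assumption, not an example from the talk):

```python
# Matrix case of the Collatz-Wielandt characterization (editor's sketch):
# for an irreducible nonnegative matrix, normalized power iteration drives
# the quotient max_i (Ax)_i / x_i down to the spectral radius.

def collatz_wielandt_bound(a, iters=200):
    n = len(a)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(a[i][j] * x[j] for j in range(n)) for i in range(n)]
        top = max(y)
        x = [v / top for v in y]          # renormalize to avoid overflow
    y = [sum(a[i][j] * x[j] for j in range(n)) for i in range(n)]
    return max(y[i] / x[i] for i in range(n))

A = [[0.0, 1.0], [2.0, 1.0]]        # nonnegative, primitive; eigenvalues 2 and -1
rho = collatz_wielandt_bound(A)     # -> approximately 2.0, the spectral radius
```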
Prof. S. Friedland
no
43:56

Fundamental delay bounds in peer-to-peer chunk-based real-time streaming systems
http://www.hamilton.ie/seminars/videos/33g_bianchi_lo.mp4
http://www.hamilton.ie/seminars/videos/33g_bianchi_lo.mp4
Thu, 11 Aug 2011 00:00:33 +0100
Speaker:
Prof. G. Bianchi
Abstract:
In this talk we address the following question: What is the minimum theoretical delay performance achievable by an overlay peer-to-peer streaming system where the streamed content is subdivided into chunks? We first show that, when posed for chunk-based systems, and as a consequence of the store-and-forward way in which chunks are delivered across the network, this question has a fundamentally different answer with respect to the case of systems where the streamed content is distributed through one or more flows (sub-streams). We then proceed by defining a convenient performance metric, called the "stream diffusion metric", which is directly related to the end-to-end minimum delay achievable in a P2P streaming network, but which allows us to circumvent the complexity emerging when dealing directly with delay. We further derive a performance bound for this metric, and we show how this bound relates to two fundamental parameters: the upload bandwidth available at each node, and the number of neighbors a node may deliver chunks to. Quite interestingly, in this bound, n-step Fibonacci sequences play a key role, and appear to set the laws that characterize the optimal operation of chunk-based systems. Finally, we constructively show by means of which topologies and system operation this bound is attainable.
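For reference, the n-step Fibonacci sequences mentioned in the abstract are, in one common convention, this editor's sketch below; the exact role they play in the delay bound is the subject of the talk itself.

```python
# n-step Fibonacci sequence (one common convention; editor's illustration,
# not the talk's derivation): each term is the sum of the previous n terms.

def n_step_fibonacci(n, length):
    seq = [1]
    while len(seq) < length:
        seq.append(sum(seq[-n:]))
    return seq

# n = 2 gives the ordinary Fibonacci numbers: 1, 1, 2, 3, 5, 8, 13, ...
```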
Prof. G. Bianchi
no
1:15:59

Robot Navigation and Mapping
http://www.hamilton.ie/seminars/videos/32j_leonard_lo.mp4
http://www.hamilton.ie/seminars/videos/32j_leonard_lo.mp4
Tue, 09 Aug 2011 00:00:32 +0100
Speaker:
Prof. J. Leonard
Abstract:
This talk will have two parts. In part one, we will review recent progress in mobile robotics, focusing on the problems of simultaneous localization and mapping (SLAM) and cooperative navigation of mobile sensor networks. The problem of SLAM is stated as follows: starting from an initial position, a mobile robot travels through a sequence of positions and obtains a set of sensor measurements at each position. The goal is for the mobile robot to process the sensor data to compute an estimate of its position while concurrently building a map of the environment. We will present SLAM results for several scenarios including land robot mapping of large-scale environments and undersea mapping using optical imaging sensors. We will also describe work on cooperative navigation for networks of autonomous underwater vehicles (AUVs) and autonomous sea-surface vehicles (ASVs).
In the second part of the talk, we will provide an overview of MIT's entry in the 2007 DARPA Urban Challenge. The goal of this effort was to produce a car that can drive autonomously in traffic. Our team developed a novel strategy for using a large number of inexpensive sensors, mounted on the vehicle periphery, and calibrated with a new cross-modal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real-time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, comprising an innovative combination of well-proven algorithms for mission planning, situational planning, situational interpretation, and trajectory control. The performance of our system in the NQE and race events will be reviewed, and ideas for future research will be discussed.
For more information, see http://grandchallenge.mit.edu
Joint work with Seth Teller, Michael Bosse, Paul Newman, Ryan Eustice, Matthew Walter, Hanumant Singh, Henrik Schmidt, Mike Benjamin, Alexander Bahr, Joseph Curcio, Andrew Patrikalakis, Matt Antone, David Barrett, Mitch Berger, Ryan Buckley, Stefan Campbell, Alexander Epstein, Gaston Fiore, Luke Fletcher, Emilio Frazzoli, Robert Galejs, Jonathan How, Albert Huang, Karl Iagnemma, Troy Jones, Sertac Karaman, Olivier Koch, Siddhartha Krishnamurthy, Yoshi Kuwata, Keoni Maheloni, David Moore, Katy Moyer, Edwin Olson, Andrew Patrikalakis, Steve Peters, Stephen Proulx, Nicholas Roy, Daniela Rus, Chris Sanders, Seth Teller, Justin Teo, Robert Truax, Matthew Walter, and Jonathan Williams.
Prof. J. Leonard
no
1:05:33

Humanoid Robot Soccer 101
http://www.hamilton.ie/seminars/videos/31t_roefer_lo.mp4
http://www.hamilton.ie/seminars/videos/31t_roefer_lo.mp4
Tue, 09 Aug 2011 00:00:31 +0100
Speaker:
Dr. T. Röfer
Abstract:
Building the software for a competitive robot soccer team is a challenging task. The robots have to perceive their environment, estimate where they and the other relevant objects are located on the field, decide what to do, and execute those decisions. All this has to happen in real-time, on board the robots, with limited computing power, and not only for a single robot, but for the whole team. The lecture will give a survey of these tasks, using the methods used by the team B-Human in the RoboCup Standard Platform League as an example.
Dr. T. Röfer
no
1:18:19

An Introduction to R
http://www.hamilton.ie/seminars/videos/30c_walz_lo.mp4
http://www.hamilton.ie/seminars/videos/30c_walz_lo.mp4
Fri, 03 Jun 2011 00:00:30 +0100
Speaker:
C. Walz
Abstract:
A first introduction to R.
C. Walz
no
59:01

Lifecycle of HIV-infected cells
http://www.hamilton.ie/seminars/videos/29j_petravic_lo.mp4
http://www.hamilton.ie/seminars/videos/29j_petravic_lo.mp4
Sat, 05 Mar 2011 00:00:29 +0000
Speaker:
Dr. J. Petravic
Abstract:
In HIV dynamics models, it is commonly assumed that HIV-infected cells all have the same viral production and death rates. We explored the dynamics of viral production and death in vitro to determine the validity of this assumption. We infected human cells with HIV-1 constructs that expressed enhanced green fluorescent protein (EGFP) and determined the amount of viral proteins produced by infected cells. Analysis of the flow cytometry data showed that the productively infected cells exhibited a broad, approximately log-normal distribution of viral protein content (spanning several orders of magnitude) that changed its shape and mean fluorescence intensity over time, and that the population death rate apparently did not correlate with its mean EGFP content.
We assumed that the observed EGFP fluorescence level represented the balance of protein production and degradation. In our model of the infected cell population, the EGFP fluorescence distribution at any time depended on probability distributions of four independent parameters: the time to the start of protein production, the protein production and degradation rates, and the lifespan of infected cells. After exploring possible combinations of parameter distributions, we found that a distribution of protein production rates that is negatively correlated with the times to the start of viral production can explain the observed time course of the distribution of EGFP intensity.
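The proposed mechanism, a log-normal spread of production rates negatively correlated with start times feeding a production/degradation balance, can be sketched as a small Monte Carlo simulation. All parameter values below are illustrative assumptions, not fitted to the data in the talk:

```python
import math
import random
import statistics

random.seed(1)

def egfp_intensity(t, t_start, k_prod, k_deg):
    """Fluorescence as the balance of production and first-order degradation:
    dF/dt = k_prod - k_deg * F for t > t_start, with F(t_start) = 0."""
    if t <= t_start:
        return 0.0
    return (k_prod / k_deg) * (1.0 - math.exp(-k_deg * (t - t_start)))

def sample_cell():
    log_k = random.gauss(0.0, 1.0)          # log-normal production rate
    k_prod = math.exp(log_k)
    # start time negatively correlated with production rate:
    # fast producers tend to start early (the abstract's hypothesis)
    t_start = max(0.0, 10.0 - 2.0 * log_k + random.gauss(0.0, 1.0))
    return t_start, k_prod, 0.1             # common degradation rate

cells = [sample_cell() for _ in range(5000)]
for t in (12.0, 24.0, 48.0):
    logs = [math.log10(egfp_intensity(t, *c))
            for c in cells if egfp_intensity(t, *c) > 0]
    # mean and spread of log-intensity drift over time, i.e. the
    # distribution changes shape, as observed in the flow cytometry data
    print(t, round(statistics.mean(logs), 2), round(statistics.stdev(logs), 2))
```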
Dr. J. Petravic
no
54:43
florian@knorn.org (Hamilton Institute)

Advances in nonlinear distortion methods of synthesis and processing of musical signals
http://www.hamilton.ie/seminars/videos/28v_lazzarini_lo.mp4
http://www.hamilton.ie/seminars/videos/28v_lazzarini_lo.mp4
Wed, 23 Mar 2011 00:00:28 +0000
Speaker:
Dr. V. Lazzarini
Abstract:
Nonlinear distortion methods form a set of elegant and computationally economic methods of synthesis and processing for musical applications. Among these, we find the famous Frequency Modulation synthesis, as developed by Chowning and made popular by Yamaha. In addition, various other techniques, including Discrete Summation Formulae, Waveshaping and Phase Distortion, can be cast in the same group of nonlinear distortion methods (and often be given alternative interpretations). Research in the area has been very limited since the mid-nineties, until a recent series of developments spurred new interest in these ideas. In this talk, I will first briefly introduce the principles of nonlinear distortion, providing an overview of the area. I will then follow this with a tour of recent work, which will include adaptive methods, virtual analogue models and analysis-synthesis applications.
Dr. V. Lazzarini
no
1:06:27
florian@knorn.org (Hamilton Institute)

Programming stem cells: modeling stem cell dynamics and organ development
http://www.hamilton.ie/seminars/videos/27y_setty_lo.mp4
http://www.hamilton.ie/seminars/videos/27y_setty_lo.mp4
Wed, 23 Feb 2011 00:00:27 +0000
Speaker:
Dr. Y. Setty
Abstract:
In recent years, we have used software engineering tools to develop reactive models to simulate and analyze the development of organs. The modeled systems embody highly complex and dynamic processes, by which a set of precursor stem cells proliferate, differentiate and move to form a functioning tissue. Three organs from evolutionarily diverse organisms have been modeled in this way: the mouse pancreas, the C. elegans gonad, and partial rodent brain development. Analysis and execution of the models provided a dynamic representation of the development, anticipated known experimental results and proposed novel testable predictions. In my talk, I will discuss challenges, goals and achievements in this direction of science.
Dr. Y. Setty
no
40:32
florian@knorn.org (Hamilton Institute)

Vehicle2x Communication
http://www.hamilton.ie/seminars/videos/26i_radusch_lo.mp4
http://www.hamilton.ie/seminars/videos/26i_radusch_lo.mp4
Fri, 18 Feb 2011 00:00:26 +0000
Speaker:
Dr. I. Radusch
Abstract:
Future drivers and vehicles will benefit threefold from upcoming integrated communication devices. Communication will increase safety and efficiency in traffic as well as making driving more enjoyable. Upcoming field operational tests will assess whether available standards and implementations are ready for wide-scale deployment. Additionally, simulation environments such as VSimRTI allow comprehensive pre-validation of novel vehicle functions utilizing vehicle2x communication.
Dr. I. Radusch
no
1:09:21
florian@knorn.org (Hamilton Institute)

Event-Driven Automation in Laser-Scanning Microscopy Applied to Live-Cell Imaging
http://www.hamilton.ie/seminars/videos/25j_wenus_lo.mp4
http://www.hamilton.ie/seminars/videos/25j_wenus_lo.mp4
Wed, 15 Dec 2010 00:00:25 +0000
Speaker:
Dr. J. Wenus
Abstract:
Microscopy of living cells is heavily employed in biomedicine to understand the mechanisms of disease progression and to develop novel pharmaceuticals. In particular, confocal microscopy, which relies on laser-based excitation of fluorescent cellular biomarkers, is frequently used for understanding the molecular actions of therapeutic drugs on abnormal cells. However, prolonged exposure to highly energetic laser radiation often leads to light-induced cell death before any spontaneous effects can occur, an effect known as 'phototoxicity'. To address this problem we have developed an automated live-cell imaging system, 'ALISSA', which employs online image processing and analysis to automatically detect biological events and then trigger appropriate changes in the image acquisition settings. This way we minimize phototoxicity, obtain higher-quality imaging data and minimize direct user involvement by introducing more automation into the whole experimental process. So far, ALISSA has been used in studies on cancer cells and neurons at the Royal College of Surgeons in Ireland, and it is currently under development aimed towards applications in commercial high-content screening systems.
This is joint work between the RCSI, Dublin (H. Huber, H. Duessmann, J. Prehn) and the Hamilton Institute, NUI Maynooth (J. Wenus, P. Paul, D. Kalamatianos, P. Wellstead), with involvement from Siemens and Carl Zeiss MicroImaging.
We gratefully acknowledge financial support from the National Biophotonics and Imaging Platform Ireland (HEA PRTLI Cycle 4).
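The event-driven idea, scan gently until an event of interest is detected and only then switch to more aggressive acquisition settings, can be sketched as follows. The `detect_event` rule, its threshold, and the settings dictionary are hypothetical stand-ins for ALISSA's actual image-analysis triggers:

```python
def detect_event(frame_mean, baseline, threshold=0.7):
    """Flag an event when the mean frame intensity falls below a
    fraction of the baseline (e.g. suggesting onset of cell death)."""
    return frame_mean < threshold * baseline

def run_acquisition(frames, low_power=0.2, high_rate_interval=1):
    """Loop over incoming frames; on event detection, shorten the
    imaging interval (accepting more light exposure only once the
    biology of interest has started)."""
    baseline = frames[0]
    settings = {"laser_power": low_power, "interval": 10}
    log = []
    for i, frame_mean in enumerate(frames):
        if detect_event(frame_mean, baseline):
            settings = {"laser_power": low_power,
                        "interval": high_rate_interval}
        log.append((i, settings["interval"]))
    return log

# mean intensities drop at frame 3; acquisition speeds up from there on
print(run_acquisition([100, 98, 99, 65, 60, 55]))
```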
Dr. J. Wenus
no
38:29
florian@knorn.org (Hamilton Institute)

Spectrum Sharing in Cognitive Radio with Quantized Channel Information
http://www.hamilton.ie/seminars/videos/23s_dey_lo.mp4
http://www.hamilton.ie/seminars/videos/23s_dey_lo.mp4
Thu, 15 Jul 2010 00:00:23 +0100
Speaker:
Dr. S. Dey
Abstract:
In this talk, we consider a wideband spectrum sharing system where a secondary user can share a number of orthogonal frequency bands, each licensed to a distinct primary user. We address the problem of optimal secondary transmit power allocation for ergodic capacity maximization, subject to an average sum (across the bands) transmit power constraint and individual average interference constraints on the primary users. The major contribution of our work lies in considering quantized channel state information (CSI) (for the vector channel space consisting of all secondary-to-secondary and secondary-to-primary channels) at the secondary transmitter, as opposed to the prevalent assumption of full CSI in most existing work. It is assumed that a band manager or a cognitive radio service provider has access to the full CSI from the secondary and primary receivers and designs (offline) an optimal power codebook based on the statistical information (channel distributions) of the channels, feeding back the index of the codebook to the secondary transmitter for every channel realization in real time via a delay-free, noiseless, limited feedback channel. A modified Generalized Lloyd-type algorithm (GLA) is designed for deriving the optimal power codebook, which is proved to be globally convergent and empirically consistent. An approximate quantized power allocation (AQPA) algorithm is presented that performs very close to its GLA-based counterpart for a large number of feedback bits and is significantly faster. We also present an extension of the modified GLA-based quantized power codebook design algorithm for the case when the feedback channel is noisy.
Numerical studies illustrate that with only 3-4 bits of feedback, the modified GLA-based algorithms provide secondary ergodic capacity very close to that achieved with full CSI, and with as little as 4 bits of feedback, AQPA provides comparable performance, thus making it an attractive choice for practical implementation. Various open problems and future research directions will also be discussed.
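For readers unfamiliar with Lloyd-type codebook design, here is the generic scalar Lloyd iteration: alternate a nearest-codeword partition with a centroid update. This is only the textbook skeleton under a squared-error objective; the talk's modified GLA optimizes power codebooks under capacity and interference objectives instead:

```python
import random

random.seed(0)

def lloyd(samples, levels, iters=50):
    """Plain scalar Lloyd (Lloyd-Max) quantizer design."""
    codebook = sorted(random.sample(samples, levels))
    for _ in range(iters):
        # partition: assign each sample to its nearest codeword
        cells = [[] for _ in codebook]
        for s in samples:
            j = min(range(levels), key=lambda k: abs(s - codebook[k]))
            cells[j].append(s)
        # update: move each codeword to the centroid of its cell
        codebook = [sum(c) / len(c) if c else codebook[j]
                    for j, c in enumerate(cells)]
    return sorted(codebook)

# exponential "channel gain" samples, 2-bit (4-level) codebook
samples = [random.expovariate(1.0) for _ in range(2000)]
cb = lloyd(samples, 4)
print([round(c, 2) for c in cb])
```

Each iteration can only decrease the average distortion, which is why Lloyd-type designs converge; proving global convergence for the modified, capacity-driven objective is part of the talk's contribution.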
Dr. S. Dey
no
59:12
florian@knorn.org (Hamilton Institute)

Large deviation theory and its applications in statistical mechanics
http://www.hamilton.ie/seminars/videos/22h_touchette_lo.mp4
http://www.hamilton.ie/seminars/videos/22h_touchette_lo.mp4
Wed, 24 Mar 2010 00:00:22 +0000
Speaker:
Dr. H. Touchette
Abstract:
The theory of large deviations, initiated by Cramer in the 1930s and later developed by Donsker and Varadhan in the 1970s, is an active field of probability theory that finds applications in many subjects, including statistics, finance, actuarial mathematics, engineering, and physics. Its use in physics dates back to the work of Ruelle, Lanford, and the late John Lewis, among others, who used concepts of large deviations in the 1970s and 1980s to study equilibrium systems and to put statistical mechanics on a rigorous footing.
I will give in this talk a survey of these applications, as well as more recent ones related to long-range equilibrium systems and nonequilibrium systems, at a level which assumes little knowledge of statistical mechanics or large deviations. As we cover these applications, we will see that large deviation theory and statistical mechanics share a common mathematical structure, which Lewis was well aware of, and which can be summarized by saying that an entropy function is to a physicist what a large deviation function (or rate function) is to a mathematician. Other connections of this sort will be discussed.
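As a concrete instance of the entropy/rate-function correspondence, the rate function of a fair coin can be computed numerically as the Legendre-Fenchel transform of its scaled cumulant generating function and checked against the closed form (a standard textbook example, not specific to the talk):

```python
import math

def scgf(t, p=0.5):
    """Scaled cumulant generating function of a Bernoulli(p) variable:
    lambda(t) = log E[exp(t X)] = log(1 - p + p e^t)."""
    return math.log(1 - p + p * math.exp(t))

def rate(x, p=0.5, lo=-50.0, hi=50.0):
    """Rate function I(x) = sup_t [t x - lambda(t)], found by ternary
    search (the objective is concave in t since lambda is convex)."""
    for _ in range(200):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if m1 * x - scgf(m1, p) < m2 * x - scgf(m2, p):
            lo = m1
        else:
            hi = m2
    t = (lo + hi) / 2
    return t * x - scgf(t, p)

# closed form for the fair coin: I(x) = x ln(2x) + (1-x) ln(2(1-x)),
# i.e. (up to sign and constants) a relative entropy
for x in (0.5, 0.7, 0.9):
    exact = x * math.log(2 * x) + (1 - x) * math.log(2 * (1 - x))
    print(x, round(rate(x), 4), round(exact, 4))
```

The identification of the rate function with a (negative) entropy is exactly the structural link between large deviations and statistical mechanics that the abstract refers to.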
Dr. H. Touchette
no
54:21
florian@knorn.org (Hamilton Institute)

Asymptotic Stability Region of Slotted Aloha
http://www.hamilton.ie/seminars/videos/21c_bordenave_lo.mp4
http://www.hamilton.ie/seminars/videos/21c_bordenave_lo.mp4
Wed, 03 Mar 2010 00:00:21 +0000
Speaker:
Dr. C. Bordenave
Abstract:
Consider N queues with nonhomogeneous packet arrivals. The queues share a common communication channel. At the beginning of each timeslot, if queue i has a packet, it attempts to access the channel with probability p_i. This attempt is successful when no other queue attempts to access the channel. For arbitrary N, the stability region of such a queueing system is a long-standing open problem. However, as the number of queues N goes to infinity, it is possible to compute the asymptotic stability region.
This is a joint work with David McDonald (Ottawa) and Alexandre Proutiere (Microsoft).
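A minimal simulation of the symmetric case (all p_i equal, Bernoulli arrivals) illustrates the stability question: the backlog stays bounded only while the total arrival rate remains below the channel's success rate. Parameter values are illustrative:

```python
import random

random.seed(42)

def simulate_aloha(n, p, arrival, slots=20000):
    """Slotted Aloha with n symmetric queues: each backlogged queue
    transmits with probability p; a slot succeeds iff exactly one
    queue transmits. Returns (throughput, final total backlog)."""
    queues = [0] * n
    served = 0
    for _ in range(slots):
        for i in range(n):                       # Bernoulli arrivals
            if random.random() < arrival:
                queues[i] += 1
        attempts = [i for i in range(n)
                    if queues[i] > 0 and random.random() < p]
        if len(attempts) == 1:                   # success iff one attempt
            queues[attempts[0]] -= 1
            served += 1
    return served / slots, sum(queues)

# with n = 10 and p = 1/n, the fully backlogged success probability is
# n * p * (1-p)^(n-1) ~ 0.39, so a total arrival rate of 0.3 is stable
thr, backlog = simulate_aloha(10, 0.1, 0.03)
print(round(thr, 3), backlog)
```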
Dr. C. Bordenave
no
56:21
florian@knorn.org (Hamilton Institute)

On the stabilization of discrete-time positive switched systems by means of Lyapunov-based switching strategies
http://www.hamilton.ie/seminars/videos/20e_valcher_lo.mp4
http://www.hamilton.ie/seminars/videos/20e_valcher_lo.mp4
Fri, 19 Feb 2010 00:00:20 +0000
Speaker:
Prof. M. E. Valcher
Abstract:
Positive switched systems typically arise to cope with two distinct modeling needs. On the one hand, switching among different models mathematically formalizes the fact that the system laws change under different operating conditions. On the other hand, the variables to be modeled may be quantities that have no meaning unless positive (temperatures, pressures, population levels, ...).
In this talk we consider the class of discrete-time positive switched systems, described, at each time t, by the first-order difference equation:
x(t+1) = A_{\sigma(t)} x(t),
where \sigma is a switching sequence, taking values in the finite set {1,2}, and for each index i, A_i is an n x n positive matrix. Assuming that both A_1 and A_2 are not Schur matrices, we focus on the stabilizability of the system, namely on the possibility of finding switching strategies that drive to zero the state evolution corresponding to every positive initial state x(0). To this end, we resort to state feedback switching laws, whose value at the time t depends on the value of some Lyapunov function in x(t).
We first explore quadratic positive definite functions, by extending a technique described by De Carlo et al. Later, by taking advantage of the system positivity, we show that other classes of Lyapunov functions, such as linear copositive and quadratic copositive ones, may be used to design state-dependent stabilizing switching laws, and some of them may be designed under weaker conditions on the pair of matrices (A_1, A_2) with respect to those required for quadratic stabilizability.
Some comparisons between the performances of the switching strategies are given.
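The linear copositive case can be sketched with a toy pair of modes: at each step the switching law applies the mode that minimizes V(x) = v^T x at the next state. The matrices below are an illustrative diagonal example, not from the talk; each has spectral radius 1.2, so neither mode is Schur on its own, yet the switched trajectory converges:

```python
def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def switched_trajectory(A_list, x0, v, steps=60):
    """State-feedback switching via a linear copositive Lyapunov
    function V(x) = v^T x: at each step apply the mode i that
    minimizes V(A_i x)."""
    x = list(x0)
    for _ in range(steps):
        nexts = [matvec(A, x) for A in A_list]
        x = min(nexts, key=lambda y: sum(vi * yi for vi, yi in zip(v, y)))
    return x

# two positive, non-Schur modes: each amplifies one state component,
# but min-V switching alternates them and drives the state to zero
A1 = [[1.2, 0.0], [0.0, 0.4]]
A2 = [[0.4, 0.0], [0.0, 1.2]]
x_end = switched_trajectory([A1, A2], [1.0, 1.0], [1.0, 1.0])
print(x_end)
```

The design question addressed in the talk is when such a vector v (or a copositive matrix, in the quadratic case) exists and how to compute it; this sketch only shows the resulting switching rule in action.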
Prof. M. E. Valcher
no
42:59

A Phylogenetic Hidden Markov Model for Immune Epitope Discovery
http://www.hamilton.ie/seminars/videos/19c_seoighe_lo.mp4
http://www.hamilton.ie/seminars/videos/19c_seoighe_lo.mp4
Wed, 09 Dec 2009 00:00:19 +0000
Speaker:
Prof. C. Seoighe
Abstract:
We describe a phylogenetic model of protein-coding sequence evolution that includes environmental variables. We apply it to a set of viral sequences from individuals with known human leukocyte antigen (HLA) genotype and include parameters to model selective pressures affecting mutations within immunogenic (epitope) regions that facilitate viral evasion of immune responses. We combine this evolutionary model with a hidden Markov model to identify regions of the HIV-1 genome that evolve under immune pressure in the presence of specific HLA class I alleles and may therefore represent potential T cell epitopes. This phylogenetic hidden Markov model (phylo-HMM) provides a probabilistic framework that can be combined with sequence or structural information to enhance epitope prediction.
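The hidden Markov layer of such a model can be illustrated with a generic two-state Viterbi decoder (epitope vs. background). The transition and emission probabilities below are invented for illustration and bear no relation to the fitted phylo-HMM; in the real model the "emissions" are phylogenetic likelihoods of alignment columns, here caricatured as "slow"/"fast" evolving sites.

```python
import math

def viterbi(obs, states, log_trans, log_emit, log_init):
    """Most likely hidden state path for an observation sequence."""
    V = [{s: log_init[s] + log_emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            best_prev = max(states, key=lambda p: V[-1][p] + log_trans[p][s])
            col[s] = V[-1][best_prev] + log_trans[best_prev][s] + log_emit[s][o]
            ptr[s] = best_prev
        V.append(col)
        back.append(ptr)
    # Trace back the optimal path from the best final state.
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

lg = math.log
states = ("background", "epitope")
# Hypothetical parameters: fast-evolving columns are enriched in epitopes.
log_trans = {"background": {"background": lg(0.9), "epitope": lg(0.1)},
             "epitope":    {"background": lg(0.2), "epitope": lg(0.8)}}
log_emit  = {"background": {"slow": lg(0.9), "fast": lg(0.1)},
             "epitope":    {"slow": lg(0.2), "fast": lg(0.8)}}
log_init  = {"background": lg(0.95), "epitope": lg(0.05)}

obs = ["slow", "slow", "fast", "fast", "fast", "slow", "slow"]
path = viterbi(obs, states, log_trans, log_emit, log_init)
```

The decoded path flags the run of fast-evolving columns as a candidate epitope region, which is the qualitative behaviour the phylo-HMM exploits.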
Prof. C. Seoighe
no
1:11:37

Stochastic Modelling of T Cell Repertoire Diversity
http://www.hamilton.ie/seminars/videos/18c_molinaparis_lo.mp4
http://www.hamilton.ie/seminars/videos/18c_molinaparis_lo.mp4
Wed, 18 Nov 2009 00:00:18 +0000
Speaker:
Dr. C. Molina-París
Abstract:
T cells are specialised white blood cells that protect the body from infection and are also able to kill infected cells. T cells are characterised by the presence of a special receptor on their cell surface called the T cell receptor (TCR). The specificity of the T cell, namely which pathogens it can recognise, is determined by the molecular structure of its TCR. T cells can be classified according to their TCRs. All T cells that have identical TCRs are said to belong to the same clonotype. There are two types of T cells: naive and memory. Naive T cells have not yet encountered pathogens and memory T cells have already encountered pathogens. In this talk, I will only consider the class of naive T cells. A diverse naive T cell pool is essential to protect against novel infections, as the immune system cannot predict which pathogens the organism will be exposed to during its lifetime. A healthy adult human possesses approximately 10^(11) naive T cells, which belong to about 10^7 to 10^8 different clonotypes. The reliability of the immune response to pathogenic challenge depends critically on the size (how many cells) and diversity (how many different TCRs or clonotypes) of the naive T cell pool of the individual. Experimental evidence suggests that interactions between TCRs and self-peptides (a self-peptide is a fragment of a household protein) displayed on the surface of specialised cells, called antigen presenting cells (APCs), are important in controlling naive T cell numbers. Naive T cells undergo one round of cell division after receiving a survival stimulus from these specialised APCs. Whether or not a particular naive T cell can receive a survival signal from a specialised APC depends both on the TCR it expresses and the array of self-peptides displayed on the surface of the APC. Competition amongst naive T cells for these interactions regulates the diversity of the naive T cell pool.
We have made use of a probabilistic (stochastic) model to describe this competition. In particular, we have modeled the time evolution of the number of T cells belonging to a particular clonotype. Our results indicate that competition maximizes TCR diversity by promoting the survival of T cell clonotypes that are most different from each other in terms of the self-peptides they are able to recognise.
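A toy version of such a stochastic competition model is a birth-death chain for one clonotype's size, with the per-cell birth rate diluted by competition for survival stimuli. All rates below are invented for illustration and are not the parameters of the model in the talk:

```python
import random

def simulate_clonotype(n0, birth, death, competition, t_max, seed=1):
    """Gillespie simulation of one clonotype's size n(t).
    The per-cell birth rate is diluted by competitors sharing the same
    survival stimuli (the 1 + competition*n factor); the death rate is
    constant per cell.  The clonotype can go extinct (n = 0)."""
    rng = random.Random(seed)
    n, t, trajectory = n0, 0.0, [n0]
    while t < t_max and n > 0:
        b = birth * n / (1.0 + competition * n)   # diluted survival stimulus
        d = death * n
        total = b + d
        t += rng.expovariate(total)               # time to next event
        n += 1 if rng.random() < b / total else -1
        trajectory.append(n)
    return trajectory

traj = simulate_clonotype(n0=10, birth=2.0, death=1.0,
                          competition=0.05, t_max=50.0)
```

With these rates the birth and death rates balance near n = 20, so the trajectory fluctuates around that level until (possibly) drifting to extinction, the kind of behaviour whose long-run statistics the full model analyses across many clonotypes.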
Dr. C. Molina-París
no
54:15

The Brain is an Embedding Machine
http://www.hamilton.ie/seminars/videos/17r_clement_lo.mp4
http://www.hamilton.ie/seminars/videos/17r_clement_lo.mp4
Wed, 30 Sep 2009 00:00:17 +0100
Speaker:
Dr. R. Clement
Abstract:
Neural responses are often generated by the physical movement of an object or a limb. Each such set of responses corresponds to a point on a smooth geometrical surface. To be able to manipulate such a representation, the brain assigns coordinates to every point on the surface, a procedure known as embedding.
In the first part of this talk the properties of the early visual system are exploited to produce a model of coordinate space based on features such as colour, orientation and movement. The feature model has the advantage over the geometric model that it is not restricted to 2- or 3-dimensional pictorial representations.
The neural mechanism is highly suited to embedding. In the second part of the talk the feature-based coordinate space will be used to explore the neural embedding of the sensory stimuli encountered in binocular vision and in the movement of the eye.
In the final part of the talk the limitations on our ability to see objects arising from the neural embedding procedures will be outlined and in particular, what can be "seen" of the shape of surfaces embedded in more than three dimensions.
Dr. R. Clement
no
40:30

From idea to product: Best practices for improving the impact of product development in large organisations
http://www.hamilton.ie/seminars/videos/16n_pettit_lo.mp4
http://www.hamilton.ie/seminars/videos/16n_pettit_lo.mp4
Thu, 17 Sep 2009 00:00:16 +0100
Speaker:
Dr. N. Pettit
Abstract:
As part of a wider improvement initiative across all parts of our value chain, Danfoss in 2007 launched an initiative to significantly improve its product development processes. The goal was to make radical improvements along the dimensions of value to customer, time to profit, unit cost and quality. In order to do this, we looked around to identify industry-wide accepted best practices to build on. When starting a similar programme in production 4 years earlier, there were clear accepted practices that had proved themselves in multiple companies and industry sectors. These are centred on the manufacturing philosophy of Toyota and generally grouped under the term "lean production". They would often be merged with another set of practices termed "six sigma", which came out of Motorola and was championed by GE.
In product development we found a different picture. Although many schools of thought have been adopted by industry, often trying to build on the back of lean production ideas (termed, unsurprisingly, "lean product development"), these were found to be relatively immature in their application and narrow in the dimensions they improved when applied. Many proponents backed different tools and methods from these schools as the "best" best practice, but none appeared to have a track record of significant impact on the multiple dimensions we needed to justify their claims.
We undertook a significant exercise to look at the internal processes we wanted to improve. We then separated the tools and methods from the different schools of thought to identify which of them were relevant to our processes and had a track record of success along at least one dimension. This led us to identify an underlying empirical set of principles that really seemed to drive true impact along all the dimensions we were looking for. Once we had these, we were able to go back and pick and choose a variety of tools and methods from the different schools of thought that embodied one or more of these principles, stealing with pride. This gave us a set of tools that, when used together, would create the impact we were looking for. Finally, we created a system to adapt, improve and test these tools and methods before spreading them out, so that our people engaged in product development find them relevant, workable, and able to quickly deliver visible and significant improvement to their product development.
The talk will outline some of these principles and methods we have built up in this journey.
Dr. N. Pettit
no
1:13:25

On the Design of Doubly-Generalized Low-Density Parity-Check Codes
http://www.hamilton.ie/seminars/videos/15m_flanagan_lo.mp4
http://www.hamilton.ie/seminars/videos/15m_flanagan_lo.mp4
Wed, 26 Aug 2009 00:00:15 +0100
Speaker:
Dr. M. Flanagan
Abstract:
Doubly-generalized low-density parity-check (DGLDPC) codes offer an attractive compromise between algebraic and random code design philosophies. In this talk we introduce the concept of DGLDPC codes, and then provide a solution for the asymptotic growth rate of the weight distribution of any DGLDPC ensemble. This tool is then used for a detailed analysis of a case study, namely a rate-1/2 DGLDPC ensemble where all the check nodes are (7,4) Hamming codes and all the variable nodes are length-7 single parity-check codes. It is illustrated how the variable node representations can heavily affect the code properties and how different variable node representations can be combined within the same graph to enhance some of the code parameters. The analysis is conducted over the binary erasure channel. Interesting features of the new codes include the capability of achieving a good compromise between waterfall and error-floor performance while preserving graphical regularity, and threshold values outperforming LDPC counterparts.
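The kind of erasure-channel threshold analysis mentioned above can be illustrated on the simpler ancestor of the DGLDPC construction, a plain (d_v, d_c)-regular LDPC ensemble, where density evolution over the BEC reduces to iterating a scalar recursion. This is a generic textbook sketch, not the analysis of the Hamming/single-parity-check ensemble from the talk:

```python
def bec_density_evolution(eps, dv, dc, iters=1000):
    """Erasure probability of a variable-to-check message after `iters`
    belief-propagation iterations on a (dv, dc)-regular LDPC ensemble,
    over a binary erasure channel with erasure probability eps:
        x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1)."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

def bp_threshold(dv, dc, tol=1e-4):
    """Largest channel erasure probability for which the density-evolution
    recursion converges to zero, found by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bec_density_evolution(mid, dv, dc) < 1e-9:
            lo = mid
        else:
            hi = mid
    return lo

# The (3,6)-regular ensemble has a BP threshold of about 0.4294 on the BEC.
threshold = bp_threshold(3, 6)
```

Generalized and doubly-generalized node types replace the scalar update with functions derived from the component codes' erasure-decoding behaviour, but the fixed-point/threshold logic is the same.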
Dr. M. Flanagan
no
52:55

Asymptotic Properties of Volterra Equations
http://www.hamilton.ie/seminars/videos/14e_velasco_lo.mp4
http://www.hamilton.ie/seminars/videos/14e_velasco_lo.mp4
Mon, 17 Aug 2009 00:00:14 +0100
Speaker:
Prof. E.C. Velasco
Abstract:
Volterra integral and difference equations may be used to model the dynamics of physical systems (viscoelasticity, motion of bodies with hereditary effects) and biological systems (population dynamics, biomechanics). In this talk we discuss asymptotic properties of solutions of both Volterra integral and Volterra difference equations. For the Volterra difference equations, we derive stability conditions based on the direct Lyapunov method and present some examples to illustrate them.
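As a toy illustration of the discrete-time case, consider the scalar Volterra difference equation x(n+1) = a x(n) + sum_{j<=n} k(n-j) x(j) with a geometrically decaying kernel. The coefficients are chosen for illustration, and the stated decay condition is a standard sufficient condition, not a result from the talk:

```python
def volterra_difference(a, kernel, x0, steps):
    """Iterate the scalar Volterra difference equation
        x(n+1) = a*x(n) + sum_{j=0..n} kernel(n-j)*x(j)."""
    xs = [x0]
    for n in range(steps):
        convolution = sum(kernel(n - j) * xs[j] for j in range(n + 1))
        xs.append(a * xs[n] + convolution)
    return xs

# Sufficient condition for decay: |a| + sum_m |k(m)| < 1.
a = 0.5
k = lambda m: 0.2 * (0.5 ** m)     # sum of |k(m)| = 0.4, and 0.5 + 0.4 < 1
xs = volterra_difference(a, k, x0=1.0, steps=200)
```

Unlike an ordinary difference equation, the whole history enters each step through the convolution term, which is why Lyapunov-type stability conditions for these equations involve the summed kernel rather than just the coefficient a.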
Prof. E.C. Velasco
no
53:57

On Fair Coexistence of Wireless Networks via CSMA-Based Transmission Algorithms
http://www.hamilton.ie/seminars/videos/13m_alanyali_lo.mp4
http://www.hamilton.ie/seminars/videos/13m_alanyali_lo.mp4
Thu, 25 Jun 2009 00:00:13 +0100
Speaker:
Prof. M. Alanyali
Abstract:
This talk will touch on wireless coexistence issues that arise due to higher spatial density of spectrum usage. We consider a fairness perspective for autonomous scheduling of transmissions by distinct sessions, subject to constraints that are represented by a conflict graph. The emphasis is on randomized backoff-based CSMA algorithms. The resulting transmission dynamics is represented by a Markovian model whose analysis suggests practical challenges in fair sharing of spectrum by distinct sessions that subscribe to a common standard, as well as by those that do not possess a common signaling protocol.
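The Markovian model of backoff-based CSMA can be sketched as Glauber dynamics over the independent sets of the conflict graph. This is an idealised, collision-free caricature with illustrative parameters, not the model analysed in the talk, but it already exhibits the fairness problem: on a three-link line, the middle link (which conflicts with both neighbours) gets a smaller share of air time.

```python
import random

def csma_glauber(adj, lam, steps, seed=0):
    """Glauber dynamics over independent sets of a conflict graph: a link
    may be active only if none of its conflicting links is active.  In
    each step one link wakes up; if blocked it stays silent, otherwise
    it transmits with probability lam/(1+lam)."""
    rng = random.Random(seed)
    n = len(adj)
    active = [False] * n
    share = [0] * n
    p_on = lam / (1.0 + lam)
    for _ in range(steps):
        i = rng.randrange(n)
        if any(active[j] for j in adj[i]):
            active[i] = False            # blocked by a conflicting link
        else:
            active[i] = rng.random() < p_on
        for k in range(n):
            share[k] += active[k]
    return [s / steps for s in share]

# Three links in a line: link 1 conflicts with both 0 and 2.
adj = {0: [1], 1: [0, 2], 2: [1]}
shares = csma_glauber(adj, lam=1.0, steps=100000)
```

For lam = 1 the stationary distribution is uniform over the five independent sets of the path, so the outer links are each active 2/5 of the time and the middle link only 1/5, the kind of topology-induced unfairness the talk examines.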
Prof. M. Alanyali
no
1:06:36

How to understand the cell by breaking it — computational inference of cellular networks from gene perturbation screens
http://www.hamilton.ie/seminars/videos/11f_markowetz_lo.mp4
http://www.hamilton.ie/seminars/videos/11f_markowetz_lo.mp4
Thu, 11 Jun 2009 00:00:11 +0100
Speaker:
Dr. F. Markowetz
Abstract:
Cellular mechanisms are driven by interactions between proteins, DNA and RNA, working together in cellular pathways. Current knowledge of information flow in the cell is still very incomplete and dissection of cellular pathways is one of the major challenges of systems biology. Computational approaches integrating heterogeneous genomic data sources into one joint model promise a comprehensive view on cellular processes. However, to be successful, computational methods need to account for the specific features of each data source.
In this talk I will focus on data from gene perturbation experiments, where individual pathway members are experimentally silenced and effects of these perturbations are measured in genomic assays. I will describe Nested Effects Models, a probabilistic graphical model especially designed to reconstruct signaling pathways from gene perturbation data.
Dr. F. Markowetz
no
48:31

Multivariate Time Series Analysis in Neurology
http://www.hamilton.ie/seminars/videos/10b_schelter_lo.mp4
http://www.hamilton.ie/seminars/videos/10b_schelter_lo.mp4
Wed, 06 May 2009 00:00:10 +0100
Speaker:
Dr. Björn Schelter
Abstract:
Nowadays, data are recorded with increasing spatial as well as temporal resolution. This calls for new methods to analyze these data sets. Thanks to the high spatial and temporal resolution of the recorded signals, inference of the causal network structure underlying them becomes feasible. In many applications a detailed analysis of these networks allows deeper insights into the normal functioning or malfunctioning of the system. In neurology this helps to understand certain diseases like epilepsy or Parkinson's disease.
Novel concepts to analyze multivariate data consisting of both time series and point processes will be presented. By means of an application to tremor in Parkinson's disease, the abilities and limitations of these techniques are discussed.
Dr. Björn Schelter
no
55:25

Probabilistic Interaction Networks
http://www.hamilton.ie/seminars/videos/09r_kulhavy_lo.mp4
http://www.hamilton.ie/seminars/videos/09r_kulhavy_lo.mp4
Wed, 29 Apr 2009 00:00:09 +0100
Speaker:
Dr. Rudolf Kulhavý
Abstract:
There is a common perception in today's business that the world around us is becoming less hierarchical and more networked and flat. While the shift towards a networked and decentralised business environment generally creates more freedom to act, it does not automatically increase the chances of success. Understanding the dynamics of networked systems — in particular the interplay between the performance of an individual node and of the entire network, and the importance of effective bonding for the well-being of an organisation — becomes a critical skill. Replacing mental models with a formal, quantitative model can improve such understanding and ultimately allow for systematic network optimisation. To this end, we propose to combine stochastic system dynamics modelling of individual nodes with probabilistic graphical modelling of a network configuration. The latter is closely related to theoretical constructs such as the Ising model in statistical mechanics or Markov random fields in image analysis. Modelling of value networks in business turns out to be even more complex because of the random structure of a network. In this talk, we discuss the economic substance and mathematical representation of node-to-node bonds, formulate a general Bayesian solution to the problem of estimating unknown state and parameter values in the resulting model, and discuss its Markov chain Monte Carlo implementation. To illustrate the concepts introduced, we revisit Clayton Christensen's qualitative model of the dynamic behaviour of new entrants versus incumbents when dealing with sustaining and disruptive innovation — and consider its reformulation as a probabilistic interaction network. We conclude by looking outside business for other instances of value networks.
Dr. Rudolf Kulhavý
no
1:04:01

Counting & Sampling Contingency Tables
http://www.hamilton.ie/seminars/videos/08m_cryan_lo.mp4
http://www.hamilton.ie/seminars/videos/08m_cryan_lo.mp4
Wed, 22 Apr 2009 00:00:08 +0100
Speaker:
Dr. M. Cryan
Abstract:
Suppose we are given two lists r and c of positive integers, where r=(r[1], ..., r[m]) represents a list of prescribed row sums and c=(c[1], ..., c[n]) is a list of prescribed column sums. We require that (r[1] + ... + r[m]) = (c[1] + ... + c[n]). In this setting, we say that an m-by-n matrix X of nonnegative integers is a contingency table (for these given row/column values) if X simultaneously satisfies all of the given row and column sums. The problem of determining whether at least one contingency table exists can be solved in polynomial time (in fact, this question is fairly trivial).
In my talk, we are interested in the more difficult problem of sampling a table uniformly at random from the entire set of contingency tables. This problem has some applications in practical statistics which I will mention. We study a very natural Markov chain on the set of contingency tables called the 2-by-2 heat bath: one step of this chain operates by selecting 2 rows and 2 columns uniformly at random, computing the induced row sums and column sums on this 2-by-2 window, then replacing the window with a table chosen randomly from all 2-by-2 tables with the induced row and column sums. This Markov chain converges to the uniform distribution on contingency tables; our goal is to show that it approaches uniformity within polynomial time. We are able to achieve this result for the case when the number of rows m is some fixed constant. Our proof is by application of the canonical paths method of Jerrum and Sinclair.
(Joint work with Martin Dyer, Leslie Goldberg, Mark Jerrum and Russell Martin)
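For concreteness, the 2-by-2 heat-bath step described in the abstract can be sketched as follows. The northwest-corner initialisation and all function names are my own additions for illustration; this is a sketch of the chain's transition rule, not the authors' code.

```python
import random

def initial_table(r, c):
    """Northwest-corner rule: builds one valid table with row sums r
    and column sums c (assumes sum(r) == sum(c))."""
    r, c = list(r), list(c)
    m, n = len(r), len(c)
    X = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        x = min(r[i], c[j])
        X[i][j] = x
        r[i] -= x
        c[j] -= x
        if r[i] == 0:
            i += 1
        else:
            j += 1
    return X

def heat_bath_step(X, rng=random):
    """One step of the 2-by-2 heat bath: pick 2 rows and 2 columns at
    random and resample the induced window uniformly among all 2-by-2
    nonnegative integer tables with the same window margins."""
    m, n = len(X), len(X[0])
    i, k = rng.sample(range(m), 2)
    j, l = rng.sample(range(n), 2)
    a = X[i][j] + X[i][l]          # induced row sums of the window
    b = X[k][j] + X[k][l]
    p = X[i][j] + X[k][j]          # induced first-column sum
    # the top-left entry x determines the whole window; feasibility
    # (all four entries nonnegative) means max(0, p-b) <= x <= min(a, p)
    x = rng.randint(max(0, p - b), min(a, p))
    X[i][j], X[i][l] = x, a - x
    X[k][j], X[k][l] = p - x, b - (p - x)
```

Every step preserves all row and column sums exactly, so the chain stays inside the set of contingency tables while resampling local windows.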
Dr. M. Cryan
no
1:01:27

ClubADSL: Enhancing Bandwidth Aggregation in your Neighborhood
http://www.hamilton.ie/seminars/videos/07d_giustiniano_lo.mp4
http://www.hamilton.ie/seminars/videos/07d_giustiniano_lo.mp4
Fri, 20 Feb 2009 00:00:07 +0000
Speaker:
Dr. D. Giustiniano
Abstract:
ADSL is becoming the standard form of residential and small-business broadband access to the Internet due, primarily, to its low deployment cost. These ADSL residential lines are often deployed with Access Points (APs) that provide wireless connectivity. While the ADSL technology has shown evident limits in terms of capacity, short-range wireless communication can guarantee a similar or higher capacity. Even more importantly, it is often possible for a residential wireless client to be in range of several other APs belonging to nearby neighbors with ADSL connections. Therefore, it is possible for a wireless client to simultaneously connect to several APs in range and effectively aggregate their available ADSL bandwidth. Recent works have shown promising results within this area, but important questions remain unresolved: i) how can we guarantee a fair distributed bandwidth allocation among clients? ii) how can the latency of TCP connections be affected by AP connections over multiple frequencies? iii) how can we minimize the MAC cost of managing these multiple APs? In order to answer these questions, we introduce ClubADSL, a prototype wireless client that can aggregate the capacity of multi-frequency APs. ClubADSL achieves fairness through distributed pressure schemes and minimizes the impact of end-to-end latency on the system performance with a resource allocation scheme based on Access Point slot assignment. We show the feasibility of such a system in seamlessly transmitting TCP traffic, and validate its experimental implementation over commodity hardware in controlled scenarios. [Joint work with Alberto Lopez, Eduard Goma, Julian Morillo, Pablo Rodriguez.]
Dr. D. Giustiniano
no
59:00

How I broke AES (Advanced Encryption Standard) — if I did it
http://www.hamilton.ie/seminars/videos/06w_smith_lo.mp4
http://www.hamilton.ie/seminars/videos/06w_smith_lo.mp4
Mon, 02 Feb 2009 00:00:06 +0000
Speaker:
Dr. W. D. Smith
Abstract:
We describe a new simple but more powerful form of linear cryptanalysis. It appears to break AES (and undoubtedly other cryptosystems too, e.g. SKIPJACK).
* But the break is "non-constructive".
* Even if this break is broken (due to the underlying models inadequately approximating the real world), we explain how AES could still contain "trapdoors" which would make cryptanalysis unexpectedly easy for anybody who knew the trapdoor.
We then discuss how to use the theory of BLECCs to build cryptosystems provably
* not containing trapdoors of this sort,
* secure against our strengthened form of linear cryptanalysis,
* secure against "differential" cryptanalysis,
* secure against D.J. Bernstein's timing attack.
Using this technique we prove a fundamental theorem: it is possible to thus encrypt N bits with security 2^(cN), via a circuit Q_N containing <= cN two-input logic gates and operating in <= c log(N) gate delays, where Q_N is constructible in polynomial (in N) time.
Dr. W. D. Smith
no
1:04:31

Router Buffer Sizing Revisited: The Role of the Output/Input Capacity Ratio
http://www.hamilton.ie/seminars/videos/05c_dovrolis_lo.mp4
http://www.hamilton.ie/seminars/videos/05c_dovrolis_lo.mp4
Mon, 13 Oct 2008 00:00:05 +0100
Speaker:
Prof. C. Dovrolis
Abstract:
The issue of router buffer sizing is still open and significant. Previous work either considers open-loop traffic or only analyzes persistent TCP flows. Our work differs in two ways. First, it considers the more realistic case of non-persistent TCP flows with a heavy-tailed size distribution. Second, instead of only looking at link metrics, we focus on the impact of buffer sizing on TCP performance. Through a combination of testbed experiments, simulation, and analysis, we reach the following conclusions: The output/input capacity ratio at a network link largely determines the required buffer size. If that ratio is larger than one, the loss rate drops exponentially with the buffer size and the optimal buffer size is close to zero. Otherwise, if the output/input capacity ratio is lower than one, the loss rate follows a power-law reduction with the buffer size and significant buffering is needed, especially with flows that are mostly in congestion avoidance. Smaller transfers, which are mostly in slow start, require significantly smaller buffers. We conclude by revisiting the ongoing debate on "small versus large" buffers from a new perspective.
Prof. C. Dovrolis
no
56:33

Patchy Solutions of HamiltonJacobiBellman Equations
http://www.hamilton.ie/seminars/videos/03a_krener_lo.mp4
http://www.hamilton.ie/seminars/videos/03a_krener_lo.mp4
Fri, 23 May 2008 00:00:03 +0100
Speaker:
Prof. A. E. Krener
Abstract:
The Hamilton-Jacobi-Bellman partial differential equation arises in the solution of optimal control problems. It is a first-order, nonlinear, hyperbolic PDE that is very difficult to solve because of the curse of dimensionality. Moreover the solution may not exist in the classical sense, i.e., the solution may not be differentiable everywhere. We describe an approach to approximately solve some of these equations on patches where the solution is smooth.
Prof. A. E. Krener
no
56:26

PassivityBased Stability Analysis and Applications to Biochemical Reaction Networks
http://www.hamilton.ie/seminars/videos/02m_arcak_lo.mp4
http://www.hamilton.ie/seminars/videos/02m_arcak_lo.mp4
Mon, 19 May 2008 00:00:02 +0100
Speaker:
Prof. M. Arcak
Abstract:
The passivity concept, an abstraction of energy conservation and dissipation in physical systems, has been instrumental in feedback control theory and has led to breakthroughs in nonlinear and adaptive control design. In this talk we discuss the use of passivity as a stability test for classes of biochemical reaction networks. The main result determines global asymptotic stability of the network from the diagonal stability of a dissipativity matrix which incorporates information about the passivity properties of the subsystems, the interconnection structure of the network, and the signs of the feedback terms. This stability test encompasses the well-known 'secant criterion' for cyclic networks and extends it to general interconnection structures represented by graphs. An extension to reaction-diffusion PDEs is also discussed. The results are illustrated on MAPK cascade models and on branched interconnection structures motivated by metabolic networks.
Prof. M. Arcak
no
48:53
florian@knorn.org (Hamilton Institute)Speaker: Prof. M. Arcak Abstract: The passivity concept, an abstraction of energy conservation and dissipation in physical systems, has been instrumental in feedback control theory and has led to breakthroughs in nonlinear and adaptive control design. In this talk we discuss the use of passivity as a stability test for classes of biochemical reaction networks. The main result determines global asymptotic stability of the network from the diagonal stability of a dissipativity matrix, which incorporates information about the passivity properties of the subsystems, the interconnection structure of the network, and the signs of the feedback terms. This stability test encompasses the well-known 'secant criterion' for cyclic networks and extends it to general interconnection structures represented by graphs. An extension to reaction-diffusion PDEs is also discussed. The results are illustrated on MAPK cascade models and on branched interconnection structures motivated by metabolic networks.Seminars,Talks,Presentations,Hamilton,Institute

Input-to-State Stability of Differential Inclusions with Application to Hysteretic Feedback Systems
http://www.hamilton.ie/seminars/videos/01e_p_ryan_lo.mp4
http://www.hamilton.ie/seminars/videos/01e_p_ryan_lo.mp4
Thu, 15 May 2008 00:00:01 +0100
Speaker:
Prof. E. P. Ryan
Abstract:
Input-to-state stability is a concept that captures "nice" properties of dynamical systems with input (e.g. bounded input implies bounded state, input "eventually small" implies state "eventually small", input convergent to zero implies state convergent to zero). Input-to-state stability (ISS) of a class of differential inclusions is described. Every system in the class is of Lur'e type: a feedback interconnection of a linear system and a (set-valued) nonlinearity. Applications of the ISS results, in the context of feedback interconnections with a hysteresis operator in the feedback path, are developed.
Prof. E. P. Ryan
no
1:02:04
florian@knorn.org (Hamilton Institute)Speaker: Prof. E. P. Ryan Abstract: Input-to-state stability is a concept that captures "nice" properties of dynamical systems with input (e.g. bounded input implies bounded state, input "eventually small" implies state "eventually small", input convergent to zero implies state convergent to zero). Input-to-state stability (ISS) of a class of differential inclusions is described. Every system in the class is of Lur'e type: a feedback interconnection of a linear system and a (set-valued) nonlinearity. Applications of the ISS results, in the context of feedback interconnections with a hysteresis operator in the feedback path, are developed.Seminars,Talks,Presentations,Hamilton,Institute