<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0">
	<channel>
		<title>Hamilton Institute Seminars (HD / large)</title>
		<itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords>
		<itunes:subtitle>Public seminars held at the Hamilton Institute, NUI Maynooth, Ireland</itunes:subtitle>
		<itunes:author>Hamilton Institute</itunes:author>
		<itunes:summary>The Hamilton Institute is a multi-disciplinary research centre established at the National University of Ireland, Maynooth in November 2001.  The Institute seeks to provide a bridge between mathematics and its applications in ICT and biology.&#13;
&#13;
In this podcast feed, we make accessible some of the best seminars held by members of the Hamilton Institute, visitors or guest speakers.&#13;
&#13;
Furthermore, it will also contain the lectures given as part of the 'Network Mathematics Graduate Programme'.</itunes:summary>
		
		
		<link>http://www.hamilton.ie/seminars.htm</link>
		<description>The Hamilton Institute is a multi-disciplinary research centre established at the National University of Ireland, Maynooth in November 2001.  The Institute seeks to provide a bridge between mathematics and its applications in ICT and biology.

In this podcast feed, we make accessible some of the best seminars held by members of the Hamilton Institute, visitors or guest speakers.

Furthermore, it will also contain the lectures given as part of the 'Network Mathematics Graduate Programme'.</description>
		<lastBuildDate>Mon, 06 Apr 2026 21:52:33 +0100</lastBuildDate>
		<language>en-GB</language>
		<copyright>© 2008-2011 - All rights reserved.</copyright>
		
		<itunes:new-feed-url>http://feeds2.feedburner.com/Hamilton-Institute-Seminars-HD</itunes:new-feed-url>
		<itunes:explicit>no</itunes:explicit>
		<managingEditor>florian@knorn.org (Florian Knorn)</managingEditor>
		<image>
			<url>http://www.hamilton.ie/seminars/videos/itunes_logo.jpg</url>
			<title>Hamilton Institute Seminars (HD / large)</title>
			<link>http://www.hamilton.ie/seminars.htm</link>
		</image>
		<itunes:image href="http://www.hamilton.ie/seminars/videos/itunes_logo.jpg"/>
		<generator>dirCast v0.7, modified by Florian Knorn</generator>
		<webMaster>florian@knorn.org (Florian Knorn)</webMaster>
		<ttl>60</ttl>


		<itunes:category text="Education">
			<itunes:category text="Higher Education"/>
		</itunes:category>
		<itunes:category text="Science &amp; Medicine">
			<itunes:category text="Natural Sciences"/>
		</itunes:category>
		<itunes:owner>
			<itunes:email>florian@knorn.org</itunes:email>
			<itunes:name>Hamilton Institute</itunes:name>
		</itunes:owner>
<item>
	<title>Periodicity of Matrix Powers in Max Algebra</title>
	<link>http://www.hamilton.ie/seminars/videos/66-s_sergeev_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/66-s_sergeev_hi.mp4</guid>
	<pubDate>Wed, 07 Aug 2013 00:01:06 +0100</pubDate>
	<description>Speaker:

Dr. S. Sergeev


Abstract:

It is well known that the sequence of max-algebraic powers of irreducible nonnegative matrices is ultimately periodic. We express this periodicity in terms of CSR-representations and give new bounds on the transient time after which the max-algebraic powers become periodic.</description>
	<itunes:author>Dr. S. Sergeev</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>55:20</itunes:duration>
	<enclosure length="815711411" type="video/mp4" url="http://www.hamilton.ie/seminars/videos/66-s_sergeev_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. S. Sergeev Abstract: It is well known that the sequence of max-algebraic powers of irreducible nonnegative matrices is ultimately periodic. We express this periodicity in terms of CSR-representations and give new bounds on the transient time after which the max-algebraic powers become periodic.</itunes:subtitle><itunes:summary>Speaker: Dr. S. Sergeev Abstract: It is well known that the sequence of max-algebraic powers of irreducible nonnegative matrices is ultimately periodic. We express this periodicity in terms of CSR-representations and give new bounds on the transient time after which the max-algebraic powers become periodic.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Very High Speed Networking in VMs and Bare Metal</title>
	<link>http://www.hamilton.ie/seminars/videos/65-l_rizzo_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/65-l_rizzo_hi.mp4</guid>
	<pubDate>Fri, 05 Jul 2013 00:01:05 +0100</pubDate>
	<description>Speaker:

Prof. L. Rizzo


Abstract:

In this talk I will give a survey of solutions and tools that we have developed in recent years to achieve extremely high packet processing rates in commodity operating systems, running on bare metal and on virtual machines.&#13;&#13;Our NETMAP framework supports processing of minimum size frames from user space at 10 Gbits per second (14.88 Mpps) with very small CPU usage. Netmap is hardware independent, supports multiple NIC types, and it does not require IOMMU or expose critical resources (e.g. device registers) to userspace. A libpcap library running on top of netmap gives instant acceleration to pcap clients without even the need to recompile applications.&#13;&#13;VALE is a software switch using the netmap API, which delivers over 20 Mpps per port, or 70 Gbits per second with 1500 byte packets. Originally designed to interconnect virtual machines, VALE is actually very convenient also as a testing tool and as a high speed IPC mechanism.&#13;&#13;More recently we have extended QEMU, and with a few small changes (using VALE as a switch, paravirtualizing the e1000 emulator, and with small device driver enhancements), we reached guest to guest communication speeds of over 1 Mpps (with socket based clients) and 5 Mpps (with netmap based clients).&#13;&#13;NETMAP and VALE are small kernel modules, part of standard FreeBSD and also available as add-on for Linux. QEMU extensions are also available from the author and are being submitted to the qemu-devel list for inclusion in the standard distributions.</description>
	<itunes:author>Prof. L. Rizzo</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:12:05</itunes:duration>
	<enclosure length="1090134696" type="video/mp4" url="http://www.hamilton.ie/seminars/videos/65-l_rizzo_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. L. Rizzo Abstract: In this talk I will give a survey of solutions and tools that we have developed in recent years to achieve extremely high packet processing rates in commodity operating systems, running on bare metal and on virtual machines. Our NETMAP framework supports processing of minimum size frames from user space at 10 Gbits per second (14.88 Mpps) with very small CPU usage. Netmap is hardware independent, supports multiple NIC types, and it does not require IOMMU or expose critical resources (e.g. device registers) to userspace. A libpcap library running on top of netmap gives instant acceleration to pcap clients without even the need to recompile applications. VALE is a software switch using the netmap API, which delivers over 20 Mpps per port, or 70 Gbits per second with 1500 byte packets. Originally designed to interconnect virtual machines, VALE is actually very convenient also as a testing tool and as a high speed IPC mechanism. More recently we have extended QEMU, and with a few small changes (using VALE as a switch, paravirtualizing the e1000 emulator, and with small device driver enhancements), we reached guest to guest communication speeds of over 1 Mpps (with socket based clients) and 5 Mpps (with netmap based clients). NETMAP and VALE are small kernel modules, part of standard FreeBSD and also available as add-on for Linux. QEMU extensions are also available from the author and are being submitted to the qemu-devel list for inclusion in the standard distributions.</itunes:subtitle><itunes:summary>Speaker: Prof. L. Rizzo Abstract: In this talk I will give a survey of solutions and tools that we have developed in recent years to achieve extremely high packet processing rates in commodity operating systems, running on bare metal and on virtual machines. 
Our NETMAP framework supports processing of minimum size frames from user space at 10 Gbits per second (14.88 Mpps) with very small CPU usage. Netmap is hardware independent, supports multiple NIC types, and it does not require IOMMU or expose critical resources (e.g. device registers) to userspace. A libpcap library running on top of netmap gives instant acceleration to pcap clients without even the need to recompile applications. VALE is a software switch using the netmap API, which delivers over 20 Mpps per port, or 70 Gbits per second with 1500 byte packets. Originally designed to interconnect virtual machines, VALE is actually very convenient also as a testing tool and as a high speed IPC mechanism. More recently we have extended QEMU, and with a few small changes (using VALE as a switch, paravirtualizing the e1000 emulator, and with small device driver enhancements), we reached guest to guest communication speeds of over 1 Mpps (with socket based clients) and 5 Mpps (with netmap based clients). NETMAP and VALE are small kernel modules, part of standard FreeBSD and also available as add-on for Linux. QEMU extensions are also available from the author and are being submitted to the qemu-devel list for inclusion in the standard distributions.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>ROMA: Random Overlook Mastering ATFM</title>
	<link>http://www.hamilton.ie/seminars/videos/64-c_lancia_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/64-c_lancia_hi.mp4</guid>
	<pubDate>Thu, 21 Mar 2013 00:01:04 +0000</pubDate>
	<description>Speaker:

C. Lancia


Abstract:

Consider the arrival process defined by t_i=i + \xi_i, where \xi_i are i.i.d. random variables. First introduced in the 1950s, this arrival process is of remarkable importance in Air Traffic Flow Management and other transportation systems, where scheduled arrivals are intrinsically subject to random variations; other frameworks where this model has proved to be capable of a good description of actual job arrivals include health care and crane handling systems. This talk is ideally divided in two parts. &#13;&#13;In the first half, I will show through numerical simulations two of the most important features of the model, namely, the insensitivity with respect to the nature (i.e. the law) of the delays \xi_i and the extremely valuable goodness of fit of simulated queue length distribution against the empirical distribution obtained from actual arrivals at London Heathrow airport. Further, I will show that the congestion related to this process is very different from the congestion of a Poisson process. This is due to the negative autocorrelation of the process. &#13;&#13;In the second part, I will restrict the analysis to the case where the delays \xi_i are exponentially distributed. In this context, I will show some preliminary results on a possible strategy to find the stationary distribution of the queue length using a bivariate generating function.</description>
	<itunes:author>C. Lancia</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>39:16</itunes:duration>
	<enclosure length="576393751" type="video/mp4" url="http://www.hamilton.ie/seminars/videos/64-c_lancia_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: C. Lancia Abstract: Consider the arrival process defined by t_i=i + \xi_i, where \xi_i are i.i.d. random variables. First introduced in the 1950s, this arrival process is of remarkable importance in Air Traffic Flow Management and other transportation systems, where scheduled arrivals are intrinsically subject to random variations; other frameworks where this model has proved to be capable of a good description of actual job arrivals include health care and crane handling systems. This talk is ideally divided in two parts. In the first half, I will show through numerical simulations two of the most important features of the model, namely, the insensitivity with respect to the nature (i.e. the law) of the delays \xi_i and the extremely valuable goodness of fit of simulated queue length distribution against the empirical distribution obtained from actual arrivals at London Heathrow airport. Further, I will show that the congestion related to this process is very different from the congestion of a Poisson process. This is due to the negative autocorrelation of the process. In the second part, I will restrict the analysis to the case where the delays \xi_i are exponentially distributed. In this context, I will show some preliminary results on a possible strategy to find the stationary distribution of the queue length using a bivariate generating function.</itunes:subtitle><itunes:summary>Speaker: C. Lancia Abstract: Consider the arrival process defined by t_i=i + \xi_i, where \xi_i are i.i.d. random variables. First introduced in the 1950s, this arrival process is of remarkable importance in Air Traffic Flow Management and other transportation systems, where scheduled arrivals are intrinsically subject to random variations; other frameworks where this model has proved to be capable of a good description of actual job arrivals include health care and crane handling systems. 
This talk is ideally divided in two parts. In the first half, I will show through numerical simulations two of the most important features of the model, namely, the insensitivity with respect to the nature (i.e. the law) of the delays \xi_i and the extremely valuable goodness of fit of simulated queue length distribution against the empirical distribution obtained from actual arrivals at London Heathrow airport. Further, I will show that the congestion related to this process is very different from the congestion of a Poisson process. This is due to the negative autocorrelation of the process. In the second part, I will restrict the analysis to the case where the delays \xi_i are exponentially distributed. In this context, I will show some preliminary results on a possible strategy to find the stationary distribution of the queue length using a bivariate generating function.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Machine-to-Machine in Smart Cities &amp; Smart Grids Vision, Technology &amp; Applications</title>
	<link>http://www.hamilton.ie/seminars/videos/63-m_dohler_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/63-m_dohler_hi.mp4</guid>
	<pubDate>Mon, 21 Jan 2013 00:01:03 +0000</pubDate>
	<description>Speaker:

Dr. M. Dohler


Abstract:

The unprecedented communication paradigm of machine-to-machine (M2M), facilitating 24/7 ultra-reliable connectivity between an a priori unseen number of automated devices, is currently gripping both industrial as well as academic communities. Whilst applications are diverse, the in-home market is of particular interest since undergoing a fundamental shift of machine-to-human communications towards fully automatized M2M. The aim of this presentation is thus to provide academic, technical and industrial insights into latest key aspects of wireless M2M networks, with particular application to the emerging smart city and smart grid verticals. &#13;&#13;Notably, I will provide an introduction to the particularities of M2M systems. Architectural, technical and privacy requirements, and thus applicable technologies will be discussed. Notably, we will dwell on the capillary and cellular embodiments of M2M in smart homes. The focus of capillary M2M, useful for real-time data gathering in homes, will be on IEEE (802.15.4e) and IETF (6LoWPAN, ROLL, COAP) standards compliant low-power multihop networking designs; furthermore, for the first time, low-power Wi-Fi will be dealt with and positioned into the eco-system of capillary M2M. The focus of cellular M2M will be on latest activities, status and trends in leading M2M standardization bodies with technical focus on ETSI M2M and 3GPP LTE-MTC. Open technical challenges, along with the industry’s vision on M2M and its shift of industries, will be discussed during the talk.&#13;</description>
	<itunes:author>Dr. M. Dohler</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:18:04</itunes:duration>
	<enclosure length="1218801562" type="video/mp4" url="http://www.hamilton.ie/seminars/videos/63-m_dohler_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. M. Dohler Abstract: The unprecedented communication paradigm of machine-to-machine (M2M), facilitating 24/7 ultra-reliable connectivity between an a priori unseen number of automated devices, is currently gripping both industrial as well as academic communities. Whilst applications are diverse, the in-home market is of particular interest since undergoing a fundamental shift of machine-to-human communications towards fully automatized M2M. The aim of this presentation is thus to provide academic, technical and industrial insights into latest key aspects of wireless M2M networks, with particular application to the emerging smart city and smart grid verticals. Notably, I will provide an introduction to the particularities of M2M systems. Architectural, technical and privacy requirements, and thus applicable technologies will be discussed. Notably, we will dwell on the capillary and cellular embodiments of M2M in smart homes. The focus of capillary M2M, useful for real-time data gathering in homes, will be on IEEE (802.15.4e) and IETF (6LoWPAN, ROLL, COAP) standards compliant low-power multihop networking designs; furthermore, for the first time, low-power Wi-Fi will be dealt with and positioned into the eco-system of capillary M2M. The focus of cellular M2M will be on latest activities, status and trends in leading M2M standardization bodies with technical focus on ETSI M2M and 3GPP LTE-MTC. Open technical challenges, along with the industry’s vision on M2M and its shift of industries, will be discussed during the talk.</itunes:subtitle><itunes:summary>Speaker: Dr. M. Dohler Abstract: The unprecedented communication paradigm of machine-to-machine (M2M), facilitating 24/7 ultra-reliable connectivity between an a priori unseen number of automated devices, is currently gripping both industrial as well as academic communities. 
Whilst applications are diverse, the in-home market is of particular interest since undergoing a fundamental shift of machine-to-human communications towards fully automatized M2M. The aim of this presentation is thus to provide academic, technical and industrial insights into latest key aspects of wireless M2M networks, with particular application to the emerging smart city and smart grid verticals. Notably, I will provide an introduction to the particularities of M2M systems. Architectural, technical and privacy requirements, and thus applicable technologies will be discussed. Notably, we will dwell on the capillary and cellular embodiments of M2M in smart homes. The focus of capillary M2M, useful for real-time data gathering in homes, will be on IEEE (802.15.4e) and IETF (6LoWPAN, ROLL, COAP) standards compliant low-power multihop networking designs; furthermore, for the first time, low-power Wi-Fi will be dealt with and positioned into the eco-system of capillary M2M. The focus of cellular M2M will be on latest activities, status and trends in leading M2M standardization bodies with technical focus on ETSI M2M and 3GPP LTE-MTC. Open technical challenges, along with the industry’s vision on M2M and its shift of industries, will be discussed during the talk.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>State Constrained Optimal Control</title>
	<link>http://www.hamilton.ie/seminars/videos/62-r_vinter_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/62-r_vinter_hi.mp4</guid>
	<pubDate>Thu, 29 Nov 2012 00:01:02 +0000</pubDate>
	<description>Speaker:

Prof. R. Vinter


Abstract:

Estimates on the distance of a nominal state trajectory from the set of state trajectories that are confined to a closed set have an important unifying role in optimal control theory. They can be used to establish non-degeneracy of optimality conditions such as the Pontryagin Maximum Principle, to show that the value function describing the sensitivity of the minimum cost to changes of the initial condition is characterized as a unique generalized solution to the Hamilton Jacobi equation, and for numerous other purposes. We discuss the validity of various presumed distance estimates and their implications, recent counter-examples illustrating some unexpected pathologies and pose some open questions.</description>
	<itunes:author>Prof. R. Vinter</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>59:16</itunes:duration>
	<enclosure length="906959739" type="video/mp4" url="http://www.hamilton.ie/seminars/videos/62-r_vinter_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. R. Vinter Abstract: Estimates on the distance of a nominal state trajectory from the set of state trajectories that are confined to a closed set have an important unifying role in optimal control theory. They can be used to establish non-degeneracy of optimality conditions such as the Pontryagin Maximum Principle, to show that the value function describing the sensitivity of the minimum cost to changes of the initial condition is characterized as a unique generalized solution to the Hamilton Jacobi equation, and for numerous other purposes. We discuss the validity of various presumed distance estimates and their implications, recent counter-examples illustrating some unexpected pathologies and pose some open questions.</itunes:subtitle><itunes:summary>Speaker: Prof. R. Vinter Abstract: Estimates on the distance of a nominal state trajectory from the set of state trajectories that are confined to a closed set have an important unifying role in optimal control theory. They can be used to establish non-degeneracy of optimality conditions such as the Pontryagin Maximum Principle, to show that the value function describing the sensitivity of the minimum cost to changes of the initial condition is characterized as a unique generalized solution to the Hamilton Jacobi equation, and for numerous other purposes. We discuss the validity of various presumed distance estimates and their implications, recent counter-examples illustrating some unexpected pathologies and pose some open questions.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Effective Information Delivery Through Opportunistic Replication in Wireless Networks</title>
	<link>http://www.hamilton.ie/seminars/videos/61-l_tassiulas_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/61-l_tassiulas_hi.mp4</guid>
	<pubDate>Wed, 28 Nov 2012 00:01:01 +0000</pubDate>
	<description>Speaker:

Prof. L. Tassiulas


Abstract:

Increased replication of information is observed in modern wireless networks either in pre-planned content replication schemes or through opportunistic caching in intermediate relay nodes as the information flows to the final destination or through overhearing of broadcast information in the wireless channel. In all cases the available other node information might be used to effectively increase the efficiency of the information delivery process. We will consider first an information theoretic perspective and present a scheme that exploits the opportunistically available overheard information to achieve the Shannon capacity of the broadcast erasure channel. Then we will consider information transport in a multi-hop flat wireless network and present schemes for spatial information replication based on popularity, in association with any-casting routing schemes, that achieve asymptotically optimal performance.</description>
	<itunes:author>Prof. L. Tassiulas</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:18:07</itunes:duration>
	<enclosure length="1241683809" type="video/mp4" url="http://www.hamilton.ie/seminars/videos/61-l_tassiulas_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. L. Tassiulas Abstract: Increased replication of information is observed in modern wireless networks either in pre-planned content replication schemes or through opportunistic caching in intermediate relay nodes as the information flows to the final destination or through overhearing of broadcast information in the wireless channel. In all cases the available other node information might be used to effectively increase the efficiency of the information delivery process. We will consider first an information theoretic perspective and present a scheme that exploits the opportunistically available overheard information to achieve the Shannon capacity of the broadcast erasure channel. Then we will consider information transport in a multi-hop flat wireless network and present schemes for spatial information replication based on popularity, in association with any-casting routing schemes, that achieve asymptotically optimal performance.</itunes:subtitle><itunes:summary>Speaker: Prof. L. Tassiulas Abstract: Increased replication of information is observed in modern wireless networks either in pre-planned content replication schemes or through opportunistic caching in intermediate relay nodes as the information flows to the final destination or through overhearing of broadcast information in the wireless channel. In all cases the available other node information might be used to effectively increase the efficiency of the information delivery process. We will consider first an information theoretic perspective and present a scheme that exploits the opportunistically available overheard information to achieve the Shannon capacity of the broadcast erasure channel. 
Then we will consider information transport in a multi-hop flat wireless network and present schemes for spatial information replication based on popularity, in association with any-casting routing schemes, that achieve asymptotically optimal performance.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Dynamics of Some Cholera Models</title>
	<link>http://www.hamilton.ie/seminars/videos/60-p_vandendriessche_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/60-p_vandendriessche_hi.mp4</guid>
	<pubDate>Thu, 22 Nov 2012 00:01:00 +0000</pubDate>
	<description>Speaker:

Prof. P. van den Driessche


Abstract:

The World Health Organization estimates that there are 3 to 5 million cholera cases per year with 100 thousand deaths spread over 40 to 50 countries. For example, there has been a recent cholera outbreak in Haiti. Cholera is a bacterial disease caused by the bacterium Vibrio cholerae, which can be transmitted to humans directly by person to person contact or indirectly via the environment (mainly through contaminated water). To better understand the dynamics of cholera, a general ordinary differential equation compartmental model is formulated that incorporates these two transmission pathways as well as multiple infection stages and pathogen states. In the model analysis, some matrix theory is used to derive a basic reproduction number, and Lyapunov functions are used to show that this number gives a sharp threshold determining whether cholera dies out or becomes endemic. In the absence of recruitment and death, a final size equation or inequality is derived, and simulations illustrate how assumptions on cholera transmission affect the final size of the epidemic. Further models that incorporate temporary immunity and hyperinfectivity using distributed delays are formulated, and numerical simulations show that oscillatory solutions may occur for parameter values taken from cholera data in the literature.</description>
	<itunes:author>Prof. P. van den Driessche</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:01:22</itunes:duration>
	<enclosure length="902257817" type="video/mp4" url="http://www.hamilton.ie/seminars/videos/60-p_vandendriessche_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. P. van den Driessche Abstract: The World Health Organization estimates that there are 3 to 5 million cholera cases per year with 100 thousand deaths spread over 40 to 50 countries. For example, there has been a recent cholera outbreak in Haiti. Cholera is a bacterial disease caused by the bacterium Vibrio cholerae, which can be transmitted to humans directly by person to person contact or indirectly via the environment (mainly through contaminated water). To better understand the dynamics of cholera, a general ordinary differential equation compartmental model is formulated that incorporates these two transmission pathways as well as multiple infection stages and pathogen states. In the model analysis, some matrix theory is used to derive a basic reproduction number, and Lyapunov functions are used to show that this number gives a sharp threshold determining whether cholera dies out or becomes endemic. In the absence of recruitment and death, a final size equation or inequality is derived, and simulations illustrate how assumptions on cholera transmission affect the final size of the epidemic. Further models that incorporate temporary immunity and hyperinfectivity using distributed delays are formulated, and numerical simulations show that oscillatory solutions may occur for parameter values taken from cholera data in the literature.</itunes:subtitle><itunes:summary>Speaker: Prof. P. van den Driessche Abstract: The World Health Organization estimates that there are 3 to 5 million cholera cases per year with 100 thousand deaths spread over 40 to 50 countries. For example, there has been a recent cholera outbreak in Haiti. Cholera is a bacterial disease caused by the bacterium Vibrio cholerae, which can be transmitted to humans directly by person to person contact or indirectly via the environment (mainly through contaminated water). 
To better understand the dynamics of cholera, a general ordinary differential equation compartmental model is formulated that incorporates these two transmission pathways as well as multiple infection stages and pathogen states. In the model analysis, some matrix theory is used to derive a basic reproduction number, and Lyapunov functions are used to show that this number gives a sharp threshold determining whether cholera dies out or becomes endemic. In the absence of recruitment and death, a final size equation or inequality is derived, and simulations illustrate how assumptions on cholera transmission affect the final size of the epidemic. Further models that incorporate temporary immunity and hyperinfectivity using distributed delays are formulated, and numerical simulations show that oscillatory solutions may occur for parameter values taken from cholera data in the literature.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Distributed Opportunistic Scheduling: A Control Theoretic Approach</title>
	<link>http://www.hamilton.ie/seminars/videos/59-a_banchs_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/59-a_banchs_hi.mp4</guid>
	<pubDate>Wed, 10 Oct 2012 00:00:59 +0100</pubDate>
	<description>Speaker:

Prof. A. Banchs


Abstract:

Distributed Opportunistic Scheduling (DOS) techniques have been recently proposed to improve the throughput performance of wireless networks. With DOS, each station contends for the channel with a certain access probability. If a contention is successful, the station measures the channel conditions and transmits in case the channel quality is above a certain threshold. Otherwise, the station does not use the transmission opportunity, allowing all stations to recontend. A key challenge with DOS is to design a distributed algorithm that optimally adjusts the access probability and the threshold of each station. To address this challenge, in this paper we first compute the configuration of these two parameters that jointly optimizes throughput performance in terms of proportional fairness. Then, we propose an adaptive algorithm based on control theory that converges to the desired point of operation. Finally, we conduct a control theoretic analysis of the algorithm to find a setting for its parameters that provides a good tradeoff between stability and speed of convergence. Simulation results validate the design of the proposed mechanism and confirm its advantages over previous proposals.</description>
	<itunes:author>Prof. A. Banchs</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>59:32</itunes:duration>
	<enclosure length="909272238" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/59-a_banchs_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. A. Banchs Abstract: Distributed Opportunistic Scheduling (DOS) techniques have been recently proposed to improve the throughput performance of wireless networks. With DOS, each station contends for the channel with a certain access probability. If a contention is successful, the station measures the channel conditions and transmits in case the channel quality is above a certain threshold. Otherwise, the station does not use the transmission opportunity, allowing all stations to recontend. A key challenge with DOS is to design a distributed algorithm that optimally adjusts the access probability and the threshold of each station. To address this challenge, in this paper we first compute the configuration of these two parameters that jointly optimizes throughput performance in terms of proportional fairness. Then, we propose an adaptive algorithm based on control theory that converges to the desired point of operation. Finally, we conduct a control theoretic analysis of the algorithm to find a setting for its parameters that provides a good tradeoff between stability and speed of convergence. Simulation results validate the design of the proposed mechanism and confirm its advantages over previous proposals.</itunes:subtitle><itunes:summary>Speaker: Prof. A. Banchs Abstract: Distributed Opportunistic Scheduling (DOS) techniques have been recently proposed to improve the throughput performance of wireless networks. With DOS, each station contends for the channel with a certain access probability. If a contention is successful, the station measures the channel conditions and transmits in case the channel quality is above a certain threshold. Otherwise, the station does not use the transmission opportunity, allowing all stations to recontend. A key challenge with DOS is to design a distributed algorithm that optimally adjusts the access probability and the threshold of each station. 
To address this challenge, in this paper we first compute the configuration of these two parameters that jointly optimizes throughput performance in terms of proportional fairness. Then, we propose an adaptive algorithm based on control theory that converges to the desired point of operation. Finally, we conduct a control theoretic analysis of the algorithm to find a setting for its parameters that provides a good tradeoff between stability and speed of convergence. Simulation results validate the design of the proposed mechanism and confirm its advantages over previous proposals.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Large-scale urban vehicular networks: mobility and connectivity</title>
	<link>http://www.hamilton.ie/seminars/videos/58-m_fiore_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/58-m_fiore_hi.mp4</guid>
	<pubDate>Fri, 05 Oct 2012 00:00:58 +0100</pubDate>
	<description>Speaker:

Dr. M. Fiore


Abstract:

Vehicular networks are large-scale communication systems that exploit wireless technologies to interconnect moving cars. Vehicular networks are envisioned to provide drivers with real-time information on potential dangers, on road traffic conditions, and on travel times, thus improving road safety and traffic efficiency. Direct vehicle-to-vehicle communication is also foreseen to enable non-safety applications, such as pervasive urban sensing and fast data dissemination throughout metropolitan regions. The quantity and relevance of potential uses make pervasive inter-vehicular communication one of the highest-impact future applications of wireless technology, which explains the growing interest of both industry and academia in this research field. In this talk, we will address two intertwined topics in vehicular networks: the modeling of vehicular mobility in large-scale urban environments and the topological characterization of the vehicular network built on top of such mobility. Both are fundamental, yet often overlooked, aspects of vehicular networking, defining the strengths and weaknesses of the vehicle-to-vehicle communication system and dictating the rules for the design of dedicated protocols.</description>
	<itunes:author>Dr. M. Fiore</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>52:44</itunes:duration>
	<enclosure length="810616650" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/58-m_fiore_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. M. Fiore Abstract: Vehicular networks are large-scale communication systems that exploit wireless technologies to interconnect moving cars. Vehicular networks are envisioned to provide drivers with real-time information on potential dangers, on road traffic conditions, and on travel times, thus improving road safety and traffic efficiency. Direct vehicle-to-vehicle communication is also foreseen to enable non-safety applications, such as pervasive urban sensing and fast data dissemination throughout metropolitan regions. The quantity and relevance of potential uses make pervasive inter-vehicular communication one of the highest-impact future applications of wireless technology, which explains the growing interest of both industry and academia in this research field. In this talk, we will address two intertwined topics in vehicular networks: the modeling of vehicular mobility in large-scale urban environments and the topological characterization of the vehicular network built on top of such mobility. Both are fundamental, yet often overlooked, aspects of vehicular networking, defining the strengths and weaknesses of the vehicle-to-vehicle communication system and dictating the rules for the design of dedicated protocols.</itunes:subtitle><itunes:summary>Speaker: Dr. M. Fiore Abstract: Vehicular networks are large-scale communication systems that exploit wireless technologies to interconnect moving cars. Vehicular networks are envisioned to provide drivers with real-time information on potential dangers, on road traffic conditions, and on travel times, thus improving road safety and traffic efficiency. Direct vehicle-to-vehicle communication is also foreseen to enable non-safety applications, such as pervasive urban sensing and fast data dissemination throughout metropolitan regions. 
The quantity and relevance of potential uses make pervasive inter-vehicular communication one of the highest-impact future applications of wireless technology, which explains the growing interest of both industry and academia in this research field. In this talk, we will address two intertwined topics in vehicular networks: the modeling of vehicular mobility in large-scale urban environments and the topological characterization of the vehicular network built on top of such mobility. Both are fundamental, yet often overlooked, aspects of vehicular networking, defining the strengths and weaknesses of the vehicle-to-vehicle communication system and dictating the rules for the design of dedicated protocols.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Learning Cell Cycle Variability at the Level of Each Phase</title>
	<link>http://www.hamilton.ie/seminars/videos/57-t_weber_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/57-t_weber_hi.mp4</guid>
	<pubDate>Thu, 27 Sep 2012 00:00:57 +0100</pubDate>
	<description>Speaker:

Dr. T. Weber


Abstract:

Inter-cellular variability in the duration of the cell cycle is a well-documented phenomenon that has been integrated into mathematical models of cell proliferation since the 1970s. Here I present a minimalist stochastic cell cycle model that allows for inter-cellular variability at the level of each single phase, i.e. G1, S and G2M. Fitting this model to flow cytometry data from 5-bromo-2'-deoxyuridine (BrdU) pulse labeling experiments of two different cell lines shows that the mean-field predictions closely mimic the measured average kinetics. However, as indicated by Bayesian inference, scenarios with deterministic or purely stochastic waiting times, especially in the G1 phase, seem to explain the data equally well. To resolve this uncertainty, a novel experimental protocol is proposed that provides sufficient information about cell kinetics to fully determine both the inter-cellular average and variance of the duration of each of the phases. Finally, I present a case in which this model is extended in order to estimate cell cycle parameters in germinal centers. The latter play a central role in the generation of highly effective antibodies that protect our body against invading pathogens.</description>
	<itunes:author>Dr. T. Weber</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>0:43:06</itunes:duration>
	<enclosure length="630819561" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/57-t_weber_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. T. Weber Abstract: Inter-cellular variability in the duration of the cell cycle is a well-documented phenomenon that has been integrated into mathematical models of cell proliferation since the 1970s. Here I present a minimalist stochastic cell cycle model that allows for inter-cellular variability at the level of each single phase, i.e. G1, S and G2M. Fitting this model to flow cytometry data from 5-bromo-2'-deoxyuridine (BrdU) pulse labeling experiments of two different cell lines shows that the mean-field predictions closely mimic the measured average kinetics. However, as indicated by Bayesian inference, scenarios with deterministic or purely stochastic waiting times, especially in the G1 phase, seem to explain the data equally well. To resolve this uncertainty, a novel experimental protocol is proposed that provides sufficient information about cell kinetics to fully determine both the inter-cellular average and variance of the duration of each of the phases. Finally, I present a case in which this model is extended in order to estimate cell cycle parameters in germinal centers. The latter play a central role in the generation of highly effective antibodies that protect our body against invading pathogens.</itunes:subtitle><itunes:summary>Speaker: Dr. T. Weber Abstract: Inter-cellular variability in the duration of the cell cycle is a well-documented phenomenon that has been integrated into mathematical models of cell proliferation since the 1970s. Here I present a minimalist stochastic cell cycle model that allows for inter-cellular variability at the level of each single phase, i.e. G1, S and G2M. Fitting this model to flow cytometry data from 5-bromo-2'-deoxyuridine (BrdU) pulse labeling experiments of two different cell lines shows that the mean-field predictions closely mimic the measured average kinetics. 
However, as indicated by Bayesian inference, scenarios with deterministic or purely stochastic waiting times, especially in the G1 phase, seem to explain the data equally well. To resolve this uncertainty, a novel experimental protocol is proposed that provides sufficient information about cell kinetics to fully determine both the inter-cellular average and variance of the duration of each of the phases. Finally, I present a case in which this model is extended in order to estimate cell cycle parameters in germinal centers. The latter play a central role in the generation of highly effective antibodies that protect our body against invading pathogens.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>EPT functions: Non-negativity analysis, Levy processes and Financial applications</title>
	<link>http://www.hamilton.ie/seminars/videos/56-b_hanzon_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/56-b_hanzon_hi.mp4</guid>
	<pubDate>Mon, 17 Sep 2012 00:00:56 +0100</pubDate>
	<description>Speaker:

Prof. B. Hanzon


Abstract:

Exponential Polynomial Trigonometric (EPT) functions are being considered as probability density functions. A specific matrix-vector representation is proposed for doing calculations with these functions. We investigate when these functions are non-negative and under which conditions the density functions are infinitely divisible, in which case there is an associated Lévy process. Applications to option price computations in finance will be presented. &#13;&#13;For background information on this topic, see www.2-ept.com.</description>
	<itunes:author>Prof. B. Hanzon</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>0:59:22</itunes:duration>
	<enclosure length="881407230" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/56-b_hanzon_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. B. Hanzon Abstract: Exponential Polynomial Trigonometric (EPT) functions are being considered as probability density functions. A specific matrix-vector representation is proposed for doing calculations with these functions. We investigate when these functions are non-negative and under which conditions the density functions are infinitely divisible, in which case there is an associated Lévy process. Applications to option price computations in finance will be presented. For background information on this topic, see www.2-ept.com.</itunes:subtitle><itunes:summary>Speaker: Prof. B. Hanzon Abstract: Exponential Polynomial Trigonometric (EPT) functions are being considered as probability density functions. A specific matrix-vector representation is proposed for doing calculations with these functions. We investigate when these functions are non-negative and under which conditions the density functions are infinitely divisible, in which case there is an associated Lévy process. Applications to option price computations in finance will be presented. For background information on this topic, see www.2-ept.com.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Playing with Standards: the IEEE 802.11 case</title>
	<link>http://www.hamilton.ie/seminars/videos/55-f_gringoli_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/55-f_gringoli_hi.mp4</guid>
	<pubDate>Wed, 12 Sep 2012 00:00:55 +0100</pubDate>
	<description>Speaker:

Dr. F. Gringoli


Abstract:

Experimenting in the field is a key activity for the evolution of the modern Internet: this is especially true for radio access protocols like IEEE 802.11, which are usually affected by unpredictable issues due to noise, competing stations and interference. Here we introduce OpenFWWF, an open-source firmware that implements a fully compliant 802.11 MAC on off-the-shelf WiFi boards: we show how it can be used in conjunction with the Linux kernel to play with the wireless stack. To this end, we further demonstrate how we can easily customize the basic DCF access firmware to explore performance-boosting variations or to measure physical properties of the wireless channel.</description>
	<itunes:author>Dr. F. Gringoli</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:02:47</itunes:duration>
	<enclosure length="922392905" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/55-f_gringoli_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. F. Gringoli Abstract: Experimenting in the field is a key activity for the evolution of the modern Internet: this is especially true for radio access protocols like IEEE 802.11, which are usually affected by unpredictable issues due to noise, competing stations and interference. Here we introduce OpenFWWF, an open-source firmware that implements a fully compliant 802.11 MAC on off-the-shelf WiFi boards: we show how it can be used in conjunction with the Linux kernel to play with the wireless stack. To this end, we further demonstrate how we can easily customize the basic DCF access firmware to explore performance-boosting variations or to measure physical properties of the wireless channel.</itunes:subtitle><itunes:summary>Speaker: Dr. F. Gringoli Abstract: Experimenting in the field is a key activity for the evolution of the modern Internet: this is especially true for radio access protocols like IEEE 802.11, which are usually affected by unpredictable issues due to noise, competing stations and interference. Here we introduce OpenFWWF, an open-source firmware that implements a fully compliant 802.11 MAC on off-the-shelf WiFi boards: we show how it can be used in conjunction with the Linux kernel to play with the wireless stack. To this end, we further demonstrate how we can easily customize the basic DCF access firmware to explore performance-boosting variations or to measure physical properties of the wireless channel.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>In Search of Optimality: Network Coding for Wireless Networks</title>
	<link>http://www.hamilton.ie/seminars/videos/54-m_chaudry_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/54-m_chaudry_hi.mp4</guid>
	<pubDate>Wed, 29 Aug 2012 00:00:54 +0100</pubDate>
	<description>Speaker:

Dr. M. A. Chaudry


Abstract:

Network coding has gained significant interest from the research community since the first paper by Ahlswede et al. in 2000. Network coding techniques can significantly increase the overall throughput of wireless networks by taking advantage of their broadcast nature. We focus on network coding for wireless networks; specifically, we investigate the Index Coding problem.&#13;&#13;In wireless networks, each transmitted packet is broadcast within a certain region and can be overheard by the nearby users. When a user needs to transmit packets, it employs Index Coding, which uses the knowledge of what the user's neighbors have heard previously (side information) in order to reduce the number of transmissions. The objective is to satisfy the demands of all the users with the minimum number of transmissions. With Index Coding, each transmitted packet can be a combination of the original packets. The Index Coding problem has been proven to be NP-hard, and NP-hard to approximate.&#13;&#13;Noting that the Index Coding problem is not only NP-hard but NP-hard to approximate, we look at it from a novel perspective and define the Complementary Index Coding problem, where the objective is to maximize the number of transmissions that are saved by employing Index Coding compared to the solution that does not involve coding. We prove that the Complementary Index Coding problem can be approximated in several cases of practical importance. We investigate the computational complexity of both the multiple unicast and multiple multicast scenarios of the Complementary Index Coding problem, and provide polynomial-time approximation algorithms.</description>
	<itunes:author>Dr. M. A. Chaudry</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>59:52</itunes:duration>
	<enclosure length="883865278" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/54-m_chaudry_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. M. A. Chaudry Abstract: Network coding has gained significant interest from the research community since the first paper by Ahlswede et al. in 2000. Network coding techniques can significantly increase the overall throughput of wireless networks by taking advantage of their broadcast nature. We focus on network coding for wireless networks; specifically, we investigate the Index Coding problem. In wireless networks, each transmitted packet is broadcast within a certain region and can be overheard by the nearby users. When a user needs to transmit packets, it employs Index Coding, which uses the knowledge of what the user's neighbors have heard previously (side information) in order to reduce the number of transmissions. The objective is to satisfy the demands of all the users with the minimum number of transmissions. With Index Coding, each transmitted packet can be a combination of the original packets. The Index Coding problem has been proven to be NP-hard, and NP-hard to approximate. Noting that the Index Coding problem is not only NP-hard but NP-hard to approximate, we look at it from a novel perspective and define the Complementary Index Coding problem, where the objective is to maximize the number of transmissions that are saved by employing Index Coding compared to the solution that does not involve coding. We prove that the Complementary Index Coding problem can be approximated in several cases of practical importance. We investigate the computational complexity of both the multiple unicast and multiple multicast scenarios of the Complementary Index Coding problem, and provide polynomial-time approximation algorithms.</itunes:subtitle><itunes:summary>Speaker: Dr. M. A. Chaudry Abstract: Network coding has gained significant interest from the research community since the first paper by Ahlswede et al. in 2000. 
Network coding techniques can significantly increase the overall throughput of wireless networks by taking advantage of their broadcast nature. We focus on network coding for wireless networks; specifically, we investigate the Index Coding problem. In wireless networks, each transmitted packet is broadcast within a certain region and can be overheard by the nearby users. When a user needs to transmit packets, it employs Index Coding, which uses the knowledge of what the user's neighbors have heard previously (side information) in order to reduce the number of transmissions. The objective is to satisfy the demands of all the users with the minimum number of transmissions. With Index Coding, each transmitted packet can be a combination of the original packets. The Index Coding problem has been proven to be NP-hard, and NP-hard to approximate. Noting that the Index Coding problem is not only NP-hard but NP-hard to approximate, we look at it from a novel perspective and define the Complementary Index Coding problem, where the objective is to maximize the number of transmissions that are saved by employing Index Coding compared to the solution that does not involve coding. We prove that the Complementary Index Coding problem can be approximated in several cases of practical importance. We investigate the computational complexity of both the multiple unicast and multiple multicast scenarios of the Complementary Index Coding problem, and provide polynomial-time approximation algorithms.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>On Continuous Counting and Learning in a Distributed System</title>
	<link>http://www.hamilton.ie/seminars/videos/53-b_radunovic_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/53-b_radunovic_hi.mp4</guid>
	<pubDate>Fri, 03 Aug 2012 00:00:53 +0100</pubDate>
	<description>Speaker:

Dr. B. Radunović


Abstract:

Consider a distributed system that consists of a coordinator node connected to multiple sites. Items from a data stream arrive at the system one by one, and are arbitrarily distributed to different sites. The goal of the system is to continuously track a function of the items received so far within a prescribed relative accuracy and at the lowest possible communication cost. This class of problems is called continual distributed stream monitoring.&#13;&#13;In this talk we will focus on two problems from this class. We will first discuss the count tracking problem (counter), which is an important building block for other more complex algorithms. The goal of the counter is to keep track of the sum of all the items from the stream at all times. We show that for a class of input loads a randomized algorithm tracks the count accurately with high probability and has an expected communication cost that is sublinear in both the data size and the number of sites. We also establish matching lower bounds. We then illustrate how our non-monotonic counter can be applied to solve more complex problems, such as tracking the second frequency moment and the Bayesian linear regression of the input stream.&#13;&#13;We will next discuss the online non-stochastic experts problem in the continual distributed setting. Here, at each time-step, one of the sites has to pick one expert from the set of experts, and then the same site receives information about payoffs of all experts for that round. The goal of the distributed system is to minimize regret with respect to the optimal choice in hindsight, while simultaneously keeping communication to the minimum. This problem is well understood in the centralized setting, but the communication trade-off in the distributed setting is unknown. The two extreme solutions to this problem are to communicate with everyone after each payoff, and not to communicate at all. 
We will discuss how to achieve the trade-off between these two approaches. We will present an algorithm that achieves a non-trivial trade-off and show the difficulties of further improving its performance.</description>
	<itunes:author>Dr. B. Radunović</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:05:53</itunes:duration>
	<enclosure length="967146348" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/53-b_radunovic_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. B. Radunović Abstract: Consider a distributed system that consists of a coordinator node connected to multiple sites. Items from a data stream arrive at the system one by one, and are arbitrarily distributed to different sites. The goal of the system is to continuously track a function of the items received so far within a prescribed relative accuracy and at the lowest possible communication cost. This class of problems is called continual distributed stream monitoring. In this talk we will focus on two problems from this class. We will first discuss the count tracking problem (counter), which is an important building block for other more complex algorithms. The goal of the counter is to keep track of the sum of all the items from the stream at all times. We show that for a class of input loads a randomized algorithm tracks the count accurately with high probability and has an expected communication cost that is sublinear in both the data size and the number of sites. We also establish matching lower bounds. We then illustrate how our non-monotonic counter can be applied to solve more complex problems, such as tracking the second frequency moment and the Bayesian linear regression of the input stream. We will next discuss the online non-stochastic experts problem in the continual distributed setting. Here, at each time-step, one of the sites has to pick one expert from the set of experts, and then the same site receives information about payoffs of all experts for that round. The goal of the distributed system is to minimize regret with respect to the optimal choice in hindsight, while simultaneously keeping communication to the minimum. This problem is well understood in the centralized setting, but the communication trade-off in the distributed setting is unknown. 
The two extreme solutions to this problem are to communicate with everyone after each payoff, and not to communicate at all. We will discuss how to achieve the trade-off between these two approaches. We will present an algorithm that achieves a non-trivial trade-off and show the difficulties of further improving its performance.</itunes:subtitle><itunes:summary>Speaker: Dr. B. Radunović Abstract: Consider a distributed system that consists of a coordinator node connected to multiple sites. Items from a data stream arrive at the system one by one, and are arbitrarily distributed to different sites. The goal of the system is to continuously track a function of the items received so far within a prescribed relative accuracy and at the lowest possible communication cost. This class of problems is called continual distributed stream monitoring. In this talk we will focus on two problems from this class. We will first discuss the count tracking problem (counter), which is an important building block for other more complex algorithms. The goal of the counter is to keep track of the sum of all the items from the stream at all times. We show that for a class of input loads a randomized algorithm tracks the count accurately with high probability and has an expected communication cost that is sublinear in both the data size and the number of sites. We also establish matching lower bounds. We then illustrate how our non-monotonic counter can be applied to solve more complex problems, such as tracking the second frequency moment and the Bayesian linear regression of the input stream. We will next discuss the online non-stochastic experts problem in the continual distributed setting. Here, at each time-step, one of the sites has to pick one expert from the set of experts, and then the same site receives information about payoffs of all experts for that round. 
The goal of the distributed system is to minimize regret with respect to the optimal choice in hindsight, while simultaneously keeping communication to the minimum. This problem is well understood in the centralized setting, but the communication trade-off in the distributed setting is unknown. The two extreme solutions to this problem are to communicate with everyone after each payoff, and not to communicate at all. We will discuss how to achieve the trade-off between these two approaches. We will present an algorithm that achieves a non-trivial trade-off and show the difficulties of further improving its performance.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Multi-channel MAC Protocols for Wireless Sensor Networks</title>
	<link>http://www.hamilton.ie/seminars/videos/52-c_cano_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/52-c_cano_hi.mp4</guid>
	<pubDate>Tue, 31 Jul 2012 00:00:52 +0100</pubDate>
	<description>Speaker:

Dr. C. Cano


Abstract:

Wireless Sensor Networks (WSNs) are networks formed by highly constrained devices that communicate measured environmental data using low-power wireless transmissions. The increase of spectrum utilization in non-licensed bands along with the reduced power used by these nodes is expected to cause high interference problems in WSNs. Therefore, the design of new dynamic spectrum access techniques specifically tailored to these networks plays an important role for their future development. In this talk the main challenges of dynamic spectrum access in WSNs will be described and a first approach to coordinate sensor nodes will be presented.</description>
	<itunes:author>Dr. C. Cano</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>40:09</itunes:duration>
	<enclosure length="610094758" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/52-c_cano_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. C. Cano Abstract: Wireless Sensor Networks (WSNs) are networks formed by highly constrained devices that communicate measured environmental data using low-power wireless transmissions. The increase of spectrum utilization in non-licensed bands along with the reduced power used by these nodes is expected to cause high interference problems in WSNs. Therefore, the design of new dynamic spectrum access techniques specifically tailored to these networks plays an important role for their future development. In this talk the main challenges of dynamic spectrum access in WSNs will be described and a first approach to coordinate sensor nodes will be presented.</itunes:subtitle><itunes:summary>Speaker: Dr. C. Cano Abstract: Wireless Sensor Networks (WSNs) are networks formed by highly constrained devices that communicate measured environmental data using low-power wireless transmissions. The increase of spectrum utilization in non-licensed bands along with the reduced power used by these nodes is expected to cause high interference problems in WSNs. Therefore, the design of new dynamic spectrum access techniques specifically tailored to these networks plays an important role for their future development. In this talk the main challenges of dynamic spectrum access in WSNs will be described and a first approach to coordinate sensor nodes will be presented.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Networking Infrastructure and Data Management for Cyber-Physical Systems</title>
	<link>http://www.hamilton.ie/seminars/videos/51-s_han_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/51-s_han_hi.mp4</guid>
	<pubDate>Tue, 10 Jul 2012 00:00:51 +0100</pubDate>
	<description>Speaker:

S. Han


Abstract:

A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. A large-scale CPS usually consists of several subsystems which are formed by networked sensors and actuators, and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. How to organize the sensors and actuators inside each subsystem, and how to interconnect these physically separated subsystems to achieve secure, reliable and real-time communication, is a major challenge.&#13;&#13;In this talk, I will first present a TDMA-based, low-power and secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems which require flexible topology control, secure and reliable communication, and adjustable real-time service support. I will describe the network management techniques for ensuring reliable routing and real-time services inside the subsystems, and the data management techniques for maintaining the quality of the data sampled from the physical world. To evaluate these proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. I will also present a lightweight and scalable solution for interconnecting heterogeneous CPS subsystems through a slim IP adaptation layer. This approach makes the underlying connectivity technologies transparent to application developers, thus enabling rapid application development and efficient migration among different CPS platforms.</description>
	<itunes:author>S. Han</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:08:32</itunes:duration>
	<enclosure length="1090471677" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/51-s_han_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: S. Han Abstract: A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. A large-scale CPS usually consists of several subsystems which are formed by networked sensors and actuators, and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. How to organize the sensors and actuators inside each subsystem, and how to interconnect these physically separated subsystems to achieve secure, reliable and real-time communication, is a major challenge. In this talk, I will first present a TDMA-based, low-power and secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems which require flexible topology control, secure and reliable communication, and adjustable real-time service support. I will describe the network management techniques for ensuring reliable routing and real-time services inside the subsystems, and the data management techniques for maintaining the quality of the data sampled from the physical world. To evaluate these proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. I will also present a lightweight and scalable solution for interconnecting heterogeneous CPS subsystems through a slim IP adaptation layer. This approach makes the underlying connectivity technologies transparent to application developers, thus enabling rapid application development and efficient migration among different CPS platforms.</itunes:subtitle><itunes:summary>Speaker: S. Han Abstract: A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. A large-scale CPS usually consists of several subsystems which are formed by networked sensors and actuators, and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. How to organize the sensors and actuators inside each subsystem, and how to interconnect these physically separated subsystems to achieve secure, reliable and real-time communication, is a major challenge. In this talk, I will first present a TDMA-based, low-power and secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems which require flexible topology control, secure and reliable communication, and adjustable real-time service support. I will describe the network management techniques for ensuring reliable routing and real-time services inside the subsystems, and the data management techniques for maintaining the quality of the data sampled from the physical world. To evaluate these proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. I will also present a lightweight and scalable solution for interconnecting heterogeneous CPS subsystems through a slim IP adaptation layer. This approach makes the underlying connectivity technologies transparent to application developers, thus enabling rapid application development and efficient migration among different CPS platforms.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Cracking the Cutoff Window</title>
	<link>http://www.hamilton.ie/seminars/videos/50-c_lancia_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/50-c_lancia_hi.mp4</guid>
	<pubDate>Mon, 11 Jun 2012 00:00:50 +0100</pubDate>
	<description>Speaker:

C. Lancia


Abstract:

The cutoff phenomenon is the abrupt convergence to stationarity of a Markov chain. It is characterized by a narrow window, centered around a cutoff time, in which the distance from stationarity suddenly drops from 1 to 0.&#13;&#13;All the examples in which cutoff has been detected clearly indicate that a drift towards the appropriate quantiles of the stationary measure could be held responsible for this phenomenon. In the case of birth-and-death chains this mechanism is fairly well understood.&#13;&#13;I will present a possible generalization of this picture to more general systems and show that there are two sources of randomness contributing to the size of the cutoff window. One is related to the drift towards the relevant quantiles of $\pi$, and the other to the thermalization in that region of the state space.&#13;&#13;For one-dimensional systems a sufficiently strong drift ensures that the thermalization is under control, but for higher-dimensional models the thermalization contribution can widen the cutoff window and even destroy the phenomenon completely.</description>
	<itunes:author>C. Lancia</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>39:38</itunes:duration>
	<enclosure length="586903472" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/50-c_lancia_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: C. Lancia Abstract: The cutoff phenomenon is the abrupt convergence to stationarity of a Markov chain. It is characterized by a narrow window, centered around a cutoff time, in which the distance from stationarity suddenly drops from 1 to 0. All the examples in which cutoff has been detected clearly indicate that a drift towards the appropriate quantiles of the stationary measure could be held responsible for this phenomenon. In the case of birth-and-death chains this mechanism is fairly well understood. I will present a possible generalization of this picture to more general systems and show that there are two sources of randomness contributing to the size of the cutoff window. One is related to the drift towards the relevant quantiles of $\pi$, and the other to the thermalization in that region of the state space. For one-dimensional systems a sufficiently strong drift ensures that the thermalization is under control, but for higher-dimensional models the thermalization contribution can widen the cutoff window and even destroy the phenomenon completely.</itunes:subtitle><itunes:summary>Speaker: C. Lancia Abstract: The cutoff phenomenon is the abrupt convergence to stationarity of a Markov chain. It is characterized by a narrow window, centered around a cutoff time, in which the distance from stationarity suddenly drops from 1 to 0. All the examples in which cutoff has been detected clearly indicate that a drift towards the appropriate quantiles of the stationary measure could be held responsible for this phenomenon. In the case of birth-and-death chains this mechanism is fairly well understood. I will present a possible generalization of this picture to more general systems and show that there are two sources of randomness contributing to the size of the cutoff window. One is related to the drift towards the relevant quantiles of $\pi$, and the other to the thermalization in that region of the state space. For one-dimensional systems a sufficiently strong drift ensures that the thermalization is under control, but for higher-dimensional models the thermalization contribution can widen the cutoff window and even destroy the phenomenon completely.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Reaching Consensus about Gossip</title>
	<link>http://www.hamilton.ie/seminars/videos/49-p_thiran_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/49-p_thiran_hi.mp4</guid>
	<pubDate>Mon, 28 May 2012 00:00:49 +0100</pubDate>
	<description>Speaker:

Prof. P. Thiran


Abstract:

An increasing number of applications require networks to perform decentralized computations over distributed data. A representative problem among these “in-network processing” tasks is the distributed computation of the average of the values present at the nodes of a network; the algorithms that perform it are known as gossip algorithms. They have recently received significant attention across different communities (networking, algorithms, signal processing, control) because they constitute simple and robust methods for distributed information processing over networks.&#13;&#13;The first part of the talk surveys some recent results on real-valued (analog) gossip algorithms. For many topologies that are realistic for wireless sensor networks, the classical nearest-neighbor gossip algorithms are slow, but a variation of these algorithms can be proven to be order optimal (a cost of O(n) messages for a network of n nodes) for some random geometric graphs. A second improvement, inspired by Uniform Gossip, allows the use of uni-directional paths to compute the average, instead of requiring the average to be routed back and forth along the same path (one-way paths are better suited to highly dynamic networks).&#13;&#13;The second part of the talk is devoted to quantized gossip on arbitrary connected networks. By their nature, quantized algorithms cannot produce a real, analog average, but they can (almost surely) reach consensus on the quantized interval that contains the average, in finite time.&#13;&#13;(This is joint work with Florence Benezit, Martin Vetterli, Alex Dimakis, Vincent Blondel and John Tsitsiklis.)</description>
	<itunes:author>Prof. P. Thiran</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:12:03</itunes:duration>
	<enclosure length="1131639716" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/49-p_thiran_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. P. Thiran Abstract: An increasing number of applications require networks to perform decentralized computations over distributed data. A representative problem among these “in-network processing” tasks is the distributed computation of the average of the values present at the nodes of a network; the algorithms that perform it are known as gossip algorithms. They have recently received significant attention across different communities (networking, algorithms, signal processing, control) because they constitute simple and robust methods for distributed information processing over networks. The first part of the talk surveys some recent results on real-valued (analog) gossip algorithms. For many topologies that are realistic for wireless sensor networks, the classical nearest-neighbor gossip algorithms are slow, but a variation of these algorithms can be proven to be order optimal (a cost of O(n) messages for a network of n nodes) for some random geometric graphs. A second improvement, inspired by Uniform Gossip, allows the use of uni-directional paths to compute the average, instead of requiring the average to be routed back and forth along the same path (one-way paths are better suited to highly dynamic networks). The second part of the talk is devoted to quantized gossip on arbitrary connected networks. By their nature, quantized algorithms cannot produce a real, analog average, but they can (almost surely) reach consensus on the quantized interval that contains the average, in finite time. (This is joint work with Florence Benezit, Martin Vetterli, Alex Dimakis, Vincent Blondel and John Tsitsiklis.)</itunes:subtitle><itunes:summary>Speaker: Prof. P. Thiran Abstract: An increasing number of applications require networks to perform decentralized computations over distributed data. A representative problem among these “in-network processing” tasks is the distributed computation of the average of the values present at the nodes of a network; the algorithms that perform it are known as gossip algorithms. They have recently received significant attention across different communities (networking, algorithms, signal processing, control) because they constitute simple and robust methods for distributed information processing over networks. The first part of the talk surveys some recent results on real-valued (analog) gossip algorithms. For many topologies that are realistic for wireless sensor networks, the classical nearest-neighbor gossip algorithms are slow, but a variation of these algorithms can be proven to be order optimal (a cost of O(n) messages for a network of n nodes) for some random geometric graphs. A second improvement, inspired by Uniform Gossip, allows the use of uni-directional paths to compute the average, instead of requiring the average to be routed back and forth along the same path (one-way paths are better suited to highly dynamic networks). The second part of the talk is devoted to quantized gossip on arbitrary connected networks. By their nature, quantized algorithms cannot produce a real, analog average, but they can (almost surely) reach consensus on the quantized interval that contains the average, in finite time. (This is joint work with Florence Benezit, Martin Vetterli, Alex Dimakis, Vincent Blondel and John Tsitsiklis.)</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>The Role of Kemeny's Constant in Properties of Markov Chains</title>
	<link>http://www.hamilton.ie/seminars/videos/48-j_hunter_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/48-j_hunter_hi.mp4</guid>
	<pubDate>Wed, 09 May 2012 00:00:48 +0100</pubDate>
	<description>Speaker:

Prof. J. J. Hunter


Abstract:

In a finite m-state irreducible Markov chain with stationary probabilities {\pi_i} and mean first passage times m_{ij} (mean recurrence time when i=j), it was first shown by Kemeny and Snell that \sum_{j=1}^{m}\pi_jm_{ij} is a constant, K, not depending on i. This constant has since become known as Kemeny’s constant. We consider a variety of techniques for finding expressions for K, derive some bounds for K, and explore various applications and interpretations of these results. Interpretations include the expected number of links that a surfer on the World Wide Web, located on a random page, needs to follow before reaching a desired location, as well as the expected time to mixing in a Markov chain. Various applications have been considered, including some perturbation results, mixing on directed graphs, and its relation to the Kirchhoff index of regular graphs.</description>
	<itunes:author>Prof. J. J. Hunter</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>52:12</itunes:duration>
	<enclosure length="832845329" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/48-j_hunter_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. J. J. Hunter Abstract: In a finite m-state irreducible Markov chain with stationary probabilities {\pi_i} and mean first passage times m_{ij} (mean recurrence time when i=j), it was first shown by Kemeny and Snell that \sum_{j=1}^{m}\pi_jm_{ij} is a constant, K, not depending on i. This constant has since become known as Kemeny’s constant. We consider a variety of techniques for finding expressions for K, derive some bounds for K, and explore various applications and interpretations of these results. Interpretations include the expected number of links that a surfer on the World Wide Web, located on a random page, needs to follow before reaching a desired location, as well as the expected time to mixing in a Markov chain. Various applications have been considered, including some perturbation results, mixing on directed graphs, and its relation to the Kirchhoff index of regular graphs.</itunes:subtitle><itunes:summary>Speaker: Prof. J. J. Hunter Abstract: In a finite m-state irreducible Markov chain with stationary probabilities {\pi_i} and mean first passage times m_{ij} (mean recurrence time when i=j), it was first shown by Kemeny and Snell that \sum_{j=1}^{m}\pi_jm_{ij} is a constant, K, not depending on i. This constant has since become known as Kemeny’s constant. We consider a variety of techniques for finding expressions for K, derive some bounds for K, and explore various applications and interpretations of these results. Interpretations include the expected number of links that a surfer on the World Wide Web, located on a random page, needs to follow before reaching a desired location, as well as the expected time to mixing in a Markov chain.
Various applications have been considered including some perturbation results, mixing on directed graphs and its relation to the Kirchhoff index of regular graphs.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Experiences in Industrial Mathematics in Ireland</title>
	<link>http://www.hamilton.ie/seminars/videos/47-s_obrien_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/47-s_obrien_hi.mp4</guid>
	<pubDate>Mon, 23 Apr 2012 00:00:47 +0100</pubDate>
	<description>Speaker:

Prof. S. O'Brien


Abstract:

In the context of the Macsi industrial mathematics group, we look at the types of problems which have arisen from industrial collaboration and examine a couple of these in detail.&#13;&#13;In particular, we look at a mathematical model for etching glass with acids which arose from a study group with industry problem presented by Waterford Crystal.</description>
	<itunes:author>Prof. S. O'Brien</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>56:25</itunes:duration>
	<enclosure length="858332413" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/47-s_obrien_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. S. O'Brien Abstract: In the context of the Macsi industrial mathematics group, we look at the types of problems which have arisen from industrial collaboration and examine a couple of these in detail. In particular, we look at a mathematical model for etching glass with acids which arose from a study group with industry problem presented by Waterford Crystal.</itunes:subtitle><itunes:summary>Speaker: Prof. S. O'Brien Abstract: In the context of the Macsi industrial mathematics group, we look at the types of problems which have arisen from industrial collaboration and examine a couple of these in detail. In particular, we look at a mathematical model for etching glass with acids which arose from a study group with industry problem presented by Waterford Crystal.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Geographically weighted regression: modelling spatial heterogeneity</title>
	<link>http://www.hamilton.ie/seminars/videos/46-m_charlton_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/46-m_charlton_hi.mp4</guid>
	<pubDate>Wed, 21 Mar 2012 00:00:46 +0000</pubDate>
	<description>Speaker:

Martin Charlton


Abstract:

Geographically Weighted Regression is a technique for exploratory spatial data analysis. In "normal" regression with data for spatial objects, we assume that the relationship we are modelling is uniform across the study area - that is, the estimated regression parameters are "whole-map" statistics. In many situations this is not necessarily the case, as mapping the residuals (the differences between the observed and predicted data) may reveal. Many different solutions have been proposed for dealing with spatial variation in these relationships; GWR provides a means of modelling such relationships.&#13;&#13;This seminar outlines the characteristics of spatial data and the challenges its use poses for analysis, explains the ideas underpinning geographically weighted regression, and details the process of estimating and interpreting the outputs of GWR models. We finish with a brief survey of current issues in GWR and potential future developments.</description>
	<itunes:author>Martin Charlton</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:05:02</itunes:duration>
	<enclosure length="998082965" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/46-m_charlton_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Martin Charlton Abstract: Geographically Weighted Regression is a technique for exploratory spatial data analysis. In "normal" regression with data for spatial objects, we assume that the relationship we are modelling is uniform across the study area - that is, the estimated regression parameters are "whole-map" statistics. In many situations this is not necessarily the case, as mapping the residuals (the differences between the observed and predicted data) may reveal. Many different solutions have been proposed for dealing with spatial variation in these relationships; GWR provides a means of modelling such relationships. This seminar outlines the characteristics of spatial data and the challenges its use poses for analysis, explains the ideas underpinning geographically weighted regression, and details the process of estimating and interpreting the outputs of GWR models. We finish with a brief survey of current issues in GWR and potential future developments.</itunes:subtitle><itunes:summary>Speaker: Martin Charlton Abstract: Geographically Weighted Regression is a technique for exploratory spatial data analysis. In "normal" regression with data for spatial objects, we assume that the relationship we are modelling is uniform across the study area - that is, the estimated regression parameters are "whole-map" statistics. In many situations this is not necessarily the case, as mapping the residuals (the differences between the observed and predicted data) may reveal. Many different solutions have been proposed for dealing with spatial variation in these relationships; GWR provides a means of modelling such relationships. This seminar outlines the characteristics of spatial data and the challenges its use poses for analysis, explains the ideas underpinning geographically weighted regression, and details the process of estimating and interpreting the outputs of GWR models. We finish with a brief survey of current issues in GWR and potential future developments.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Cascade Dynamics on Complex Networks</title>
	<link>http://www.hamilton.ie/seminars/videos/45-a_hackett_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/45-a_hackett_hi.mp4</guid>
	<pubDate>Wed, 14 Mar 2012 00:00:45 +0000</pubDate>
	<description>Speaker:

Dr. A. Hackett


Abstract:

A cascade or avalanche is observed when interactions between the components of a system allow an initially localized effect to propagate globally. For example, the malfunction of technological systems like email networks or electrical power grids is often attributable to a cascade of failures triggered by some isolated event. Similarly, the transmission of infectious diseases and the adoption of innovations or cultural fads may induce cascades among people in society. It has been extensively demonstrated that such dynamics depend sensitively on the patterns of interaction laid out in the underlying network of the system. One of the primary goals of the study of complex networks is to provide a sound theoretical basis for this dependence.&#13;&#13;In this seminar we discuss some recent progress in modelling the interaction between network structure and dynamics. Focusing on the phenomenon of high clustering, we present two recently proposed classes of random graphs that achieve non-zero clustering coefficients. We provide an analytically tractable framework for modeling cascades in both of these classes. This framework is then used to calculate the mean cascade size and the cascade threshold for a broad class of binary-state dynamics.</description>
	<itunes:author>Dr. A. Hackett</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:10:26</itunes:duration>
	<enclosure length="1007809681" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/45-a_hackett_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. A. Hackett Abstract: A cascade or avalanche is observed when interactions between the components of a system allow an initially localized effect to propagate globally. For example, the malfunction of technological systems like email networks or electrical power grids is often attributable to a cascade of failures triggered by some isolated event. Similarly, the transmission of infectious diseases and the adoption of innovations or cultural fads may induce cascades among people in society. It has been extensively demonstrated that such dynamics depend sensitively on the patterns of interaction laid out in the underlying network of the system. One of the primary goals of the study of complex networks is to provide a sound theoretical basis for this dependence. In this seminar we discuss some recent progress in modelling the interaction between network structure and dynamics. Focusing on the phenomenon of high clustering, we present two recently proposed classes of random graphs that achieve non-zero clustering coefficients. We provide an analytically tractable framework for modeling cascades in both of these classes. This framework is then used to calculate the mean cascade size and the cascade threshold for a broad class of binary-state dynamics.</itunes:subtitle><itunes:summary>Speaker: Dr. A. Hackett Abstract: A cascade or avalanche is observed when interactions between the components of a system allow an initially localized effect to propagate globally. For example, the malfunction of technological systems like email networks or electrical power grids is often attributable to a cascade of failures triggered by some isolated event. Similarly, the transmission of infectious diseases and the adoption of innovations or cultural fads may induce cascades among people in society. It has been extensively demonstrated that such dynamics depend sensitively on the patterns of interaction laid out in the underlying network of the system. One of the primary goals of the study of complex networks is to provide a sound theoretical basis for this dependence. In this seminar we discuss some recent progress in modelling the interaction between network structure and dynamics. Focusing on the phenomenon of high clustering, we present two recently proposed classes of random graphs that achieve non-zero clustering coefficients. We provide an analytically tractable framework for modeling cascades in both of these classes. This framework is then used to calculate the mean cascade size and the cascade threshold for a broad class of binary-state dynamics.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Exploit prediction to handle mobility in wireless ad hoc networks</title>
	<link>http://www.hamilton.ie/seminars/videos/44-x_li_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/44-x_li_hi.mp4</guid>
	<pubDate>Thu, 01 Mar 2012 00:00:44 +0000</pubDate>
	<description>Speaker:

Dr. X. Li


Abstract:

Node mobility is often a hindering factor in the networking process in wireless ad hoc networks. In this talk, we will introduce two of our recent works that address this problem through a prediction approach.&#13;&#13;The first work proposes an AutoRegressive Hello protocol (ARH) for mobile ad hoc networks. A hello protocol is a basic tool for neighborhood discovery; it requires nodes to announce their existence/aliveness by periodic ‘hello’ messages. ARH evolves along with the network dynamics by predicting node mobility, and seamlessly tunes its ‘hello’ frequency using local knowledge only.&#13;&#13;The second work proposes a distributed Prediction-based Secure and Reliable routing framework (PSR) for wireless body area networks. In this protocol, each node predicts the quality of every incident link, as well as changes in its neighbor set, based on an autoregressive model. According to the prediction results, it selects the next routing hop and decides whether to enable or disable source authentication.</description>
	<itunes:author>Dr. X. Li</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>48:49</itunes:duration>
	<enclosure length="751301730" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/44-x_li_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. X. Li Abstract: Node mobility is often a hindering factor in the networking process in wireless ad hoc networks. In this talk, we will introduce two of our recent works that address this problem through a prediction approach. The first work proposes an AutoRegressive Hello protocol (ARH) for mobile ad hoc networks. A hello protocol is a basic tool for neighborhood discovery; it requires nodes to announce their existence/aliveness by periodic ‘hello’ messages. ARH evolves along with the network dynamics by predicting node mobility, and seamlessly tunes its ‘hello’ frequency using local knowledge only. The second work proposes a distributed Prediction-based Secure and Reliable routing framework (PSR) for wireless body area networks. In this protocol, each node predicts the quality of every incident link, as well as changes in its neighbor set, based on an autoregressive model. According to the prediction results, it selects the next routing hop and decides whether to enable or disable source authentication.</itunes:subtitle><itunes:summary>Speaker: Dr. X. Li Abstract: Node mobility is often a hindering factor in the networking process in wireless ad hoc networks. In this talk, we will introduce two of our recent works that address this problem through a prediction approach. The first work proposes an AutoRegressive Hello protocol (ARH) for mobile ad hoc networks. A hello protocol is a basic tool for neighborhood discovery; it requires nodes to announce their existence/aliveness by periodic ‘hello’ messages. ARH evolves along with the network dynamics by predicting node mobility, and seamlessly tunes its ‘hello’ frequency using local knowledge only. The second work proposes a distributed Prediction-based Secure and Reliable routing framework (PSR) for wireless body area networks. In this protocol, each node predicts the quality of every incident link, as well as changes in its neighbor set, based on an autoregressive model. According to the prediction results, it selects the next routing hop and decides whether to enable or disable source authentication.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Juggler's Exclusion Process</title>
	<link>http://www.hamilton.ie/seminars/videos/43-l_leskela_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/43-l_leskela_hi.mp4</guid>
	<pubDate>Wed, 01 Feb 2012 00:00:43 +0000</pubDate>
	<description>Speaker:

Prof. L. Leskelä


Abstract:

Juggler's exclusion process describes a system of particles on the positive integers where particles drift down to zero at unit speed. After a particle hits zero, it jumps into a randomly chosen unoccupied site. I will model the system as a set-valued Markov process and show that the process is ergodic if the family of jump height distributions is uniformly integrable. In a special case where the particles perform jumps in an entropy-maximizing fashion, the process reaches its equilibrium in finite nonrandom time, and the equilibrium distribution can be represented as a Gibbs measure conforming to a linear gravitational potential. Time permitting, I will also discuss a recent result which sharply characterizes uniform integrability using the theory of stochastic orders, and allows one to interpret the dominating function in Lebesgue's dominated convergence theorem in a natural probabilistic way.&#13;&#13;This talk is based on joint work with Harri Varpanen (Aalto University, Finland) and Matti Vihola (University of Jyväskylä, Finland).</description>
	<itunes:author>Prof. L. Leskelä</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>52:17</itunes:duration>
	<enclosure length="791065015" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/43-l_leskela_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. L. Leskelä Abstract: Juggler's exclusion process describes a system of particles on the positive integers where particles drift down to zero at unit speed. After a particle hits zero, it jumps into a randomly chosen unoccupied site. I will model the system as a set-valued Markov process and show that the process is ergodic if the family of jump height distributions is uniformly integrable. In a special case where the particles perform jumps in an entropy-maximizing fashion, the process reaches its equilibrium in finite nonrandom time, and the equilibrium distribution can be represented as a Gibbs measure conforming to a linear gravitational potential. Time permitting, I will also discuss a recent result which sharply characterizes uniform integrability using the theory of stochastic orders, and allows one to interpret the dominating function in Lebesgue's dominated convergence theorem in a natural probabilistic way. This talk is based on joint work with Harri Varpanen (Aalto University, Finland) and Matti Vihola (University of Jyväskylä, Finland).</itunes:subtitle><itunes:summary>Speaker: Prof. L. Leskelä Abstract: Juggler's exclusion process describes a system of particles on the positive integers where particles drift down to zero at unit speed. After a particle hits zero, it jumps into a randomly chosen unoccupied site. I will model the system as a set-valued Markov process and show that the process is ergodic if the family of jump height distributions is uniformly integrable. In a special case where the particles perform jumps in an entropy-maximizing fashion, the process reaches its equilibrium in finite nonrandom time, and the equilibrium distribution can be represented as a Gibbs measure conforming to a linear gravitational potential. 
Time permitting, I will also discuss a recent result which sharply characterizes uniform integrability using the theory of stochastic orders, and allows one to interpret the dominating function in Lebesgue's dominated convergence theorem in a natural probabilistic way. This talk is based on joint work with Harri Varpanen (Aalto University, Finland) and Matti Vihola (University of Jyväskylä, Finland).</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Exploratory analysis of human mobility and activities from geo-referenced communication data streams</title>
	<link>http://www.hamilton.ie/seminars/videos/42-a_pozdnoukhov_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/42-a_pozdnoukhov_hi.mp4</guid>
	<pubDate>Thu, 19 Jan 2012 00:00:42 +0000</pubDate>
	<description>Speaker:

Dr. A. Pozdnoukhov


Abstract:

Communication technologies, with their very high penetration into society, can serve as a particularly rich source of information for exploring and modelling the evolution of complex social systems.&#13;&#13;This talk presents a framework of methods useful for exploratory analysis, modelling and visualization of data streams available from Twitter, instant messenger services and mobile phone communication logs. We apply probabilistic topic models to uncover the temporal evolution and spatial variability of the population’s response to various stimuli such as large-scale sporting, political or cultural events. We demonstrate how atypical activity levels can be identified by fitting a non-homogeneous Markov-modulated Poisson process and exploring the spatial variability of the component corresponding to unusual bursts/lulls of human activity.&#13;&#13;Finally, we present initial ideas on the combined use of available data sources and models within a joint large-scale geocomputation framework to uncover the complex interplay of mobility and communication patterns.</description>
	<itunes:author>Dr. A. Pozdnoukhov</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>46:47</itunes:duration>
	<enclosure length="695861220" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/42-a_pozdnoukhov_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. A. Pozdnoukhov Abstract: Communication technologies, with their very high penetration into society, can serve as a particularly rich source of information for exploring and modelling the evolution of complex social systems. This talk presents a framework of methods useful for exploratory analysis, modelling and visualization of data streams available from Twitter, instant messenger services and mobile phone communication logs. We apply probabilistic topic models to uncover the temporal evolution and spatial variability of the population’s response to various stimuli such as large-scale sporting, political or cultural events. We demonstrate how atypical activity levels can be identified by fitting a non-homogeneous Markov-modulated Poisson process and exploring the spatial variability of the component corresponding to unusual bursts/lulls of human activity. Finally, we present initial ideas on the combined use of available data sources and models within a joint large-scale geocomputation framework to uncover the complex interplay of mobility and communication patterns.</itunes:subtitle><itunes:summary>Speaker: Dr. A. Pozdnoukhov Abstract: Communication technologies, with their very high penetration into society, can serve as a particularly rich source of information for exploring and modelling the evolution of complex social systems. This talk presents a framework of methods useful for exploratory analysis, modelling and visualization of data streams available from Twitter, instant messenger services and mobile phone communication logs. We apply probabilistic topic models to uncover the temporal evolution and spatial variability of the population’s response to various stimuli such as large-scale sporting, political or cultural events. 
We demonstrate how atypical activity levels can be identified by fitting a non-homogeneous Markov-modulated Poisson process and exploring the spatial variability of the component corresponding to unusual bursts/lulls of human activity. Finally, we present initial ideas on the combined use of available data sources and models within a joint large-scale geocomputation framework to uncover the complex interplay of mobility and communication patterns.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Diagonal Stability and Completely Positive Matrices</title>
	<link>http://www.hamilton.ie/seminars/videos/41-a_berman_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/41-a_berman_hi.mp4</guid>
	<pubDate>Mon, 17 Oct 2011 00:00:41 +0100</pubDate>
	<description>Speaker:

Prof. A. Berman


Abstract:

In this paper a general notion of a common diagonal Lyapunov matrix is formulated for a collection of n×n matrices A_1,...,A_s and polyhedral cones k_1,...,k_s in R^n. Necessary and sufficient conditions are derived for the existence of a common diagonal Lyapunov matrix in this setting.&#13;&#13;This talk is based on joint work with Christopher King &amp; Robert Shorten.</description>
	<itunes:author>Prof. A. Berman</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>39:33</itunes:duration>
	<enclosure length="613945468" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/41-a_berman_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. A. Berman Abstract: In this paper a general notion of a common diagonal Lyapunov matrix is formulated for a collection of n×n matrices A_1,...,A_s and polyhedral cones k_1,...,k_s in R^n. Necessary and sufficient conditions are derived for the existence of a common diagonal Lyapunov matrix in this setting. This talk is based on joint work with Christopher King &amp; Robert Shorten.</itunes:subtitle><itunes:summary>Speaker: Prof. A. Berman Abstract: In this paper a general notion of a common diagonal Lyapunov matrix is formulated for a collection of n×n matrices A_1,...,A_s and polyhedral cones k_1,...,k_s in R^n. Necessary and sufficient conditions are derived for the existence of a common diagonal Lyapunov matrix in this setting. This talk is based on joint work with Christopher King &amp; Robert Shorten.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Load balancing for Markov chains</title>
	<link>http://www.hamilton.ie/seminars/videos/40-s_kirkland_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/40-s_kirkland_hi.mp4</guid>
	<pubDate>Mon, 17 Oct 2011 00:00:40 +0100</pubDate>
	<description>Speaker:

Prof. S. Kirkland


Abstract:

A square matrix T is called stochastic if its entries are nonnegative and its row sums are all equal to one. Stochastic matrices are the centrepiece of the theory of discrete-time, time-homogeneous Markov chains on a finite state space. If some power of the stochastic matrix T has all positive entries, then there is a unique left eigenvector for T, known as the stationary distribution, to which the iterates of the Markov chain converge, regardless of what the initial distribution for the chain is. Thus, in this setting, the stationary distribution can be thought of as giving the probability that the chain is in a particular state over the long run.&#13;&#13;In many applications, the stochastic matrix under consideration is equipped with an underlying combinatorial structure, which can be recorded in a directed graph. Given a stochastic matrix T, how are the entries in the stationary distribution influenced by the structure of the directed graph associated with T? In this talk we investigate a question of that type by finding the minimum value of the maximum entry in the stationary distribution for T, as T ranges over the set of stochastic matrices with a given directed graph. The solution involves techniques from matrix theory, graph theory, and nonlinear programming.</description>
	<itunes:author>Prof. S. Kirkland</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>39:18</itunes:duration>
	<enclosure length="614831587" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/40-s_kirkland_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. S. Kirkland Abstract: A square matrix T is called stochastic if its entries are nonnegative and its row sums are all equal to one. Stochastic matrices are the centrepiece of the theory of discrete-time, time-homogeneous Markov chains on a finite state space. If some power of the stochastic matrix T has all positive entries, then there is a unique left eigenvector for T, known as the stationary distribution, to which the iterates of the Markov chain converge, regardless of what the initial distribution for the chain is. Thus, in this setting, the stationary distribution can be thought of as giving the probability that the chain is in a particular state over the long run. In many applications, the stochastic matrix under consideration is equipped with an underlying combinatorial structure, which can be recorded in a directed graph. Given a stochastic matrix T, how are the entries in the stationary distribution influenced by the structure of the directed graph associated with T? In this talk we investigate a question of that type by finding the minimum value of the maximum entry in the stationary distribution for T, as T ranges over the set of stochastic matrices with a given directed graph. The solution involves techniques from matrix theory, graph theory, and nonlinear programming.</itunes:subtitle><itunes:summary>Speaker: Prof. S. Kirkland Abstract: A square matrix T is called stochastic if its entries are nonnegative and its row sums are all equal to one. Stochastic matrices are the centrepiece of the theory of discrete-time, time-homogeneous Markov chains on a finite state space. If some power of the stochastic matrix T has all positive entries, then there is a unique left eigenvector for T, known as the stationary distribution, to which the iterates of the Markov chain converge, regardless of what the initial distribution for the chain is. 
Thus, in this setting, the stationary distribution can be thought of as giving the probability that the chain is in a particular state over the long run. In many applications, the stochastic matrix under consideration is equipped with an underlying combinatorial structure, which can be recorded in a directed graph. Given a stochastic matrix T, how are the entries in the stationary distribution influenced by the structure of the directed graph associated with T? In this talk we investigate a question of that type by finding the minimum value of the maximum entry in the stationary distribution for T, as T ranges over the set of stochastic matrices with a given directed graph. The solution involves techniques from matrix theory, graph theory, and nonlinear programming.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>The Symmetric Nonnegative Inverse Eigenvalue Problem</title>
	<link>http://www.hamilton.ie/seminars/videos/39-h_smigoc_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/39-h_smigoc_hi.mp4</guid>
	<pubDate>Mon, 17 Oct 2011 00:00:39 +0100</pubDate>
	<description>Speaker:

Dr. H. Šmigoc


Abstract:

The question of which lists of complex numbers are the spectra of nonnegative matrices is known as the nonnegative inverse eigenvalue problem, and the same question posed for symmetric nonnegative matrices is called the symmetric nonnegative inverse eigenvalue problem. In the talk we will present an overview of some recent results on the symmetric nonnegative inverse eigenvalue problem.&#13;&#13;Joint work with T. J. Laffey.</description>
	<itunes:author>Dr. H. Šmigoc</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>31:54</itunes:duration>
	<enclosure length="488054898" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/39-h_smigoc_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. H. Šmigoc Abstract: The question of which lists of complex numbers are the spectra of nonnegative matrices is known as the nonnegative inverse eigenvalue problem, and the same question posed for symmetric nonnegative matrices is called the symmetric nonnegative inverse eigenvalue problem. In the talk we will present an overview of some recent results on the symmetric nonnegative inverse eigenvalue problem. Joint work with T. J. Laffey.</itunes:subtitle><itunes:summary>Speaker: Dr. H. Šmigoc Abstract: The question of which lists of complex numbers are the spectra of nonnegative matrices is known as the nonnegative inverse eigenvalue problem, and the same question posed for symmetric nonnegative matrices is called the symmetric nonnegative inverse eigenvalue problem. In the talk we will present an overview of some recent results on the symmetric nonnegative inverse eigenvalue problem. Joint work with T. J. Laffey.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>On the Block Numerical Range of Operators in Banach Spaces</title>
	<link>http://www.hamilton.ie/seminars/videos/38-k_foerster_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/38-k_foerster_hi.mp4</guid>
	<pubDate>Mon, 17 Oct 2011 00:00:38 +0100</pubDate>
	<description>Speaker:

Prof. K.-H. Förster


Abstract:

In this talk the following topics will be discussed:&#13;- The Numerical Range of Operators in Banach Spaces.&#13;- The Block Numerical Range of Operators.&#13;- The Block Numerical Range of Operator Functions.&#13;- The Block Numerical Range of m-monic Perron-Frobenius-Matrix-Polynomials.</description>
	<itunes:author>Prof. K.-H. Förster</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>37:52</itunes:duration>
	<enclosure length="572930666" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/38-k_foerster_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. K.-H. Förster Abstract: In this talk the following topics will be discussed: - The Numerical Range of Operators in Banach Spaces. - The Block Numerical Range of Operators. - The Block Numerical Range of Operator Functions. - The Block Numerical Range of m-monic Perron-Frobenius-Matrix-Polynomials.</itunes:subtitle><itunes:summary>Speaker: Prof. K.-H. Förster Abstract: In this talk the following topics will be discussed: - The Numerical Range of Operators in Banach Spaces. - The Block Numerical Range of Operators. - The Block Numerical Range of Operator Functions. - The Block Numerical Range of m-monic Perron-Frobenius-Matrix-Polynomials.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Essentially Negative News About Positive Systems</title>
	<link>http://www.hamilton.ie/seminars/videos/37-p_colaneri_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/37-p_colaneri_hi.mp4</guid>
	<pubDate>Mon, 17 Oct 2011 00:00:37 +0100</pubDate>
	<description>Speaker:

Prof. P. Colaneri


Abstract:

In this paper the discretisation of switched and non-switched linear positive systems using Padé approximations is considered. Padé approximations to the matrix exponential are sometimes used by control engineers for discretising continuous time systems and for control system design. We observe that this method of approximation is not suited for the discretisation of positive dynamic systems, for two key reasons. First, certain types of Lyapunov stability are not, in general, preserved. Secondly, and more seriously, positivity need not be preserved, even when stability is. Finally we present an alternative approximation to the matrix exponential which preserves positivity, and linear and quadratic stability.&#13;&#13;This talk is based on joint work with Steve Kirkland, Annalisa Zappavigna &amp; Robert Shorten.</description>
	<itunes:author>Prof. P. Colaneri</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>46:25</itunes:duration>
	<enclosure length="699925094" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/37-p_colaneri_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. P. Colaneri Abstract: In this paper the discretisation of switched and non-switched linear positive systems using Padé approximations is considered. Padé approximations to the matrix exponential are sometimes used by control engineers for discretising continuous time systems and for control system design. We observe that this method of approximation is not suited for the discretisation of positive dynamic systems, for two key reasons. First, certain types of Lyapunov stability are not, in general, preserved. Secondly, and more seriously, positivity need not be preserved, even when stability is. Finally we present an alternative approximation to the matrix exponential which preserves positivity, and linear and quadratic stability. This talk is based on joint work with Steve Kirkland, Annalisa Zappavigna &amp; Robert Shorten.</itunes:subtitle><itunes:summary>Speaker: Prof. P. Colaneri Abstract: In this paper the discretisation of switched and non-switched linear positive systems using Padé approximations is considered. Padé approximations to the matrix exponential are sometimes used by control engineers for discretising continuous time systems and for control system design. We observe that this method of approximation is not suited for the discretisation of positive dynamic systems, for two key reasons. First, certain types of Lyapunov stability are not, in general, preserved. Secondly, and more seriously, positivity need not be preserved, even when stability is. Finally we present an alternative approximation to the matrix exponential which preserves positivity, and linear and quadratic stability. This talk is based on joint work with Steve Kirkland, Annalisa Zappavigna &amp; Robert Shorten.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Some relationships between formal power series and nonnegative matrices</title>
	<link>http://www.hamilton.ie/seminars/videos/36-t_laffey_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/36-t_laffey_hi.mp4</guid>
	<pubDate>Mon, 17 Oct 2011 00:00:36 +0100</pubDate>
	<description>Speaker:

Prof. T. Laffey


Abstract:

Let σ = (λ_1,...,λ_n) be a list of complex numbers which we aim to realize constructively as the spectrum of a nonnegative matrix. Most constructions available in the literature rely on building matrices related to companion matrices from the polynomial f(x) = (x-λ_1)...(x-λ_n). Kim, Ormes and Roush (JAMS 2000) showed how certain formal power series related to f(x), which have all coefficients, other than the leading one, negative, can be used in finding constructions over the semiring of polynomials with nonnegative coefficients, while, in joint work, Šmigoc and this author (ELA 17 (2008) 333-342, LAMA 58 (2010), 1053-1059) have used polynomials having all their non-leading coefficients negative, to find realizations when σ has not more than two entries with positive real parts. Beginning with the observation that if λ_1,...,λ_n are all positive, then the Taylor expansion of the nth root of F(t) = (1-λ_1t)...(1-λ_nt) about t=0 has all its non-leading coefficients negative, we present a number of results on the negativity of the coefficients of power series and their applications to nonnegative matrices.</description>
	<itunes:author>Prof. T. Laffey</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>44:29</itunes:duration>
	<enclosure length="668463999" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/36-t_laffey_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. T. Laffey Abstract: Let σ = (λ_1,...,λ_n) be a list of complex numbers which we aim to realize constructively as the spectrum of a nonnegative matrix. Most constructions available in the literature rely on building matrices related to companion matrices from the polynomial f(x) = (x-λ_1)...(x-λ_n). Kim, Ormes and Roush (JAMS 2000) showed how certain formal power series related to f(x), which have all coefficients, other than the leading one, negative, can be used in finding constructions over the semiring of polynomials with nonnegative coefficients, while, in joint work, Šmigoc and this author (ELA 17 (2008) 333-342, LAMA 58 (2010), 1053-1059) have used polynomials having all their non-leading coefficients negative, to find realizations when σ has not more than two entries with positive real parts. Beginning with the observation that if λ_1,...,λ_n are all positive, then the Taylor expansion of the nth root of F(t) = (1-λ_1t)...(1-λ_nt) about t=0 has all its non-leading coefficients negative, we present a number of results on the negativity of the coefficients of power series and their applications to nonnegative matrices.</itunes:subtitle><itunes:summary>Speaker: Prof. T. Laffey Abstract: Let σ = (λ_1,...,λ_n) be a list of complex numbers which we aim to realize constructively as the spectrum of a nonnegative matrix. Most constructions available in the literature rely on building matrices related to companion matrices from the polynomial f(x) = (x-λ_1)...(x-λ_n). 
Kim, Ormes and Roush (JAMS 2000) showed how certain formal power series related to f(x), which have all coefficients, other than the leading one, negative, can be used in finding constructions over the semiring of polynomials with nonnegative coefficients, while, in joint work, Šmigoc and this author (ELA 17 (2008) 333-342, LAMA 58 (2010), 1053-1059) have used polynomials having all their non-leading coefficients negative, to find realizations when σ has not more than two entries with positive real parts. Beginning with the observation that if λ_1,...,λ_n are all positive, then the Taylor expansion of the nth root of F(t) = (1-λ_1t)...(1-λ_nt) about t=0 has all its non-leading coefficients negative, we present a number of results on the negativity of the coefficients of power series and their applications to nonnegative matrices.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Maximal exponents of polyhedral cones</title>
	<link>http://www.hamilton.ie/seminars/videos/35-r_loewy_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/35-r_loewy_hi.mp4</guid>
	<pubDate>Mon, 17 Oct 2011 00:00:35 +0100</pubDate>
	<description>Speaker:

Prof. R. Loewy


Abstract:

Let K be a proper (i.e., closed, pointed, full and convex) cone in R^n. We consider A∈R^(n×n) which is K-primitive, that is, there exists a positive integer l such that A^l.x ∈ int K for every 0≠x∈K. The smallest such l is called the exponent of A, denoted by γ(A).&#13;&#13;For a polyhedral cone K, the maximum value of γ(A), taken over all K-primitive matrices A, is denoted by γ(K). Our main result is that for any positive integers m,n, 3 ≤ n ≤ m, the maximum value of γ(K), as K runs through all n-dimensional polyhedral cones with m extreme rays, equals&#13;&#13;( n - 1 )( m - 1 ) + ½( 1 + (-1)^{(n-1)m} ).&#13;&#13;We will consider various uniqueness issues related to the main result as well as its connections to known results.&#13;&#13;This talk is based on a joint work with Micha Perles and Bit-Shun Tam.</description>
	<itunes:author>Prof. R. Loewy</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>48:32</itunes:duration>
	<enclosure length="737670619" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/35-r_loewy_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. R. Loewy Abstract: Let K be a proper (i.e., closed, pointed, full and convex) cone in R^n. We consider A∈R^(n×n) which is K-primitive, that is, there exists a positive integer l such that A^l.x ∈ int K for every 0≠x∈K. The smallest such l is called the exponent of A, denoted by γ(A). For a polyhedral cone K, the maximum value of γ(A), taken over all K-primitive matrices A, is denoted by γ(K). Our main result is that for any positive integers m,n, 3 ≤ n ≤ m, the maximum value of γ(K), as K runs through all n-dimensional polyhedral cones with m extreme rays, equals ( n - 1 )( m - 1 ) + ½( 1 + (-1)^{(n-1)m} ). We will consider various uniqueness issues related to the main result as well as its connections to known results. This talk is based on a joint work with Micha Perles and Bit-Shun Tam.</itunes:subtitle><itunes:summary>Speaker: Prof. R. Loewy Abstract: Let K be a proper (i.e., closed, pointed, full and convex) cone in R^n. We consider A∈R^(n×n) which is K-primitive, that is, there exists a positive integer l such that A^l.x ∈ int K for every 0≠x∈K. The smallest such l is called the exponent of A, denoted by γ(A). For a polyhedral cone K, the maximum value of γ(A), taken over all K-primitive matrices A, is denoted by γ(K). Our main result is that for any positive integers m,n, 3 ≤ n ≤ m, the maximum value of γ(K), as K runs through all n-dimensional polyhedral cones with m extreme rays, equals ( n - 1 )( m - 1 ) + ½( 1 + (-1)^{(n-1)m} ). We will consider various uniqueness issues related to the main result as well as its connections to known results. This talk is based on a joint work with Micha Perles and Bit-Shun Tam.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>From nonnegative matrices to nonnegative tensors</title>
	<link>http://www.hamilton.ie/seminars/videos/34-s_friedland_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/34-s_friedland_hi.mp4</guid>
	<pubDate>Mon, 17 Oct 2011 00:00:34 +0100</pubDate>
	<description>Speaker:

Prof. S. Friedland


Abstract:

In this talk we will discuss a number of generalizations of results on nonnegative matrices to nonnegative tensors, such as: irreducibility and weak irreducibility, the Perron-Frobenius theorem, the Collatz-Wielandt characterization, Kingman's inequality, the Karlin-Ost and Friedland theorems, tropical spectral radius, diagonal scaling, the Friedland-Karlin inequality, and nonnegative multilinear forms.</description>
	<itunes:author>Prof. S. Friedland</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>43:56</itunes:duration>
	<enclosure length="670355545" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/34-s_friedland_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. S. Friedland Abstract: In this talk we will discuss a number of generalizations of results on nonnegative matrices to nonnegative tensors, such as: irreducibility and weak irreducibility, the Perron-Frobenius theorem, the Collatz-Wielandt characterization, Kingman's inequality, the Karlin-Ost and Friedland theorems, tropical spectral radius, diagonal scaling, the Friedland-Karlin inequality, and nonnegative multilinear forms.</itunes:subtitle><itunes:summary>Speaker: Prof. S. Friedland Abstract: In this talk we will discuss a number of generalizations of results on nonnegative matrices to nonnegative tensors, such as: irreducibility and weak irreducibility, the Perron-Frobenius theorem, the Collatz-Wielandt characterization, Kingman's inequality, the Karlin-Ost and Friedland theorems, tropical spectral radius, diagonal scaling, the Friedland-Karlin inequality, and nonnegative multilinear forms.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Fundamental delay bounds in peer-to-peer chunk-based real-time streaming systems</title>
	<link>http://www.hamilton.ie/seminars/videos/33-g_bianchi_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/33-g_bianchi_hi.mp4</guid>
	<pubDate>Thu, 11 Aug 2011 00:00:33 +0100</pubDate>
	<description>Speaker:

Prof. G. Bianchi


Abstract:

In this talk we address the following question: What is the minimum theoretical delay performance achievable by an overlay peer-to-peer streaming system where the streamed content is subdivided into chunks? We first show that, when posed for chunk-based systems, and as a consequence of the store-and-forward way in which chunks are delivered across the network, this question has a fundamentally different answer from the case of systems where the streamed content is distributed through one or more flows (sub-streams). We then define a convenient performance metric, called the "stream diffusion metric", which is directly related to the minimum end-to-end delay achievable in a P2P streaming network, but which allows us to circumvent the complexity that emerges when dealing with delay directly. We further derive a performance bound for this metric, and we show how it relates to two fundamental parameters: the upload bandwidth available at each node, and the number of neighbors a node may deliver chunks to. Quite interestingly, in this bound, n-step Fibonacci sequences play a key role, and appear to set the laws that characterize the optimal operation of chunk-based systems. Finally, we constructively show which topologies and modes of system operation attain this bound.</description>
	<itunes:author>Prof. G. Bianchi</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:15:59</itunes:duration>
	<enclosure length="1144032858" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/33-g_bianchi_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. G. Bianchi Abstract: In this talk we address the following question: What is the minimum theoretical delay performance achievable by an overlay peer-to-peer streaming system where the streamed content is subdivided into chunks? We first show that, when posed for chunk-based systems, and as a consequence of the store-and-forward way in which chunks are delivered across the network, this question has a fundamentally different answer from the case of systems where the streamed content is distributed through one or more flows (sub-streams). We then define a convenient performance metric, called the "stream diffusion metric", which is directly related to the minimum end-to-end delay achievable in a P2P streaming network, but which allows us to circumvent the complexity that emerges when dealing with delay directly. We further derive a performance bound for this metric, and we show how it relates to two fundamental parameters: the upload bandwidth available at each node, and the number of neighbors a node may deliver chunks to. Quite interestingly, in this bound, n-step Fibonacci sequences play a key role, and appear to set the laws that characterize the optimal operation of chunk-based systems. Finally, we constructively show which topologies and modes of system operation attain this bound.</itunes:subtitle><itunes:summary>Speaker: Prof. G. Bianchi Abstract: In this talk we address the following question: What is the minimum theoretical delay performance achievable by an overlay peer-to-peer streaming system where the streamed content is subdivided into chunks? We first show that, when posed for chunk-based systems, and as a consequence of the store-and-forward way in which chunks are delivered across the network, this question has a fundamentally different answer from the case of systems where the streamed content is distributed through one or more flows (sub-streams). We then define a convenient performance metric, called the "stream diffusion metric", which is directly related to the minimum end-to-end delay achievable in a P2P streaming network, but which allows us to circumvent the complexity that emerges when dealing with delay directly. We further derive a performance bound for this metric, and we show how it relates to two fundamental parameters: the upload bandwidth available at each node, and the number of neighbors a node may deliver chunks to. Quite interestingly, in this bound, n-step Fibonacci sequences play a key role, and appear to set the laws that characterize the optimal operation of chunk-based systems. Finally, we constructively show which topologies and modes of system operation attain this bound.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Robot Navigation and Mapping</title>
	<link>http://www.hamilton.ie/seminars/videos/32-j_leonard_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/32-j_leonard_hi.mp4</guid>
	<pubDate>Tue, 09 Aug 2011 00:00:32 +0100</pubDate>
	<description>Speaker:

Prof. J. Leonard


Abstract:

This talk will have two parts. In part one, we will review recent progress in mobile robotics, focusing on the problems of simultaneous localization and mapping (SLAM) and cooperative navigation of mobile sensor networks. The problem of SLAM is stated as follows: starting from an initial position, a mobile robot travels through a sequence of positions and obtains a set of sensor measurements at each position. The goal is for the mobile robot to process the sensor data to compute an estimate of its position while concurrently building a map of the environment. We will present SLAM results for several scenarios including land robot mapping of large-scale environments and undersea mapping using optical imaging sensors. We will also describe work on cooperative navigation for networks of autonomous underwater vehicles (AUVs) and autonomous sea-surface vehicles (ASVs).&#13;&#13;In the second part of the talk, we will provide an overview of MIT's entry in the 2007 DARPA Urban Challenge. The goal of this effort was to produce a car that can drive autonomously in traffic. Our team developed a novel strategy for using a large number of inexpensive sensors, mounted on the vehicle periphery, and calibrated with a new cross-modal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real-time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, composed of an innovative combination of well-proven algorithms for mission planning, situational planning, situational interpretation, and trajectory control. 
The performance of our system in the NQE and race events will be reviewed, and ideas for future research will be discussed.&#13;&#13;For more information, see http://grandchallenge.mit.edu&#13;&#13;Joint work with Seth Teller, Michael Bosse, Paul Newman, Ryan Eustice, Matthew Walter, Hanumant Singh, Henrik Schmidt, Mike Benjamin, Alexander Bahr, Joseph Curcio, Andrew Patrikalakis, Matt Antone, David Barrett, Mitch Berger, Ryan Buckley, Stefan Campbell, Alexander Epstein, Gaston Fiore, Luke Fletcher, Emilio Frazzoli, Robert Galejs, Jonathan How, Albert Huang, Karl Iagnemma, Troy Jones, Sertac Karaman, Olivier Koch, Siddhartha Krishnamurthy, Yoshi Kuwata, Keoni Maheloni, David Moore, Katy Moyer, Edwin Olson, Andrew Patrikalakis, Steve Peters, Stephen Proulx, Nicholas Roy, Daniela Rus, Chris Sanders, Seth Teller, Justin Teo, Robert Truax, Matthew Walter, and Jonathan Williams.</description>
	<itunes:author>Prof. J. Leonard</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:05:33</itunes:duration>
	<enclosure length="947575588" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/32-j_leonard_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. J. Leonard Abstract: This talk will have two parts. In part one, we will review recent progress in mobile robotics, focusing on the problems of simultaneous localization and mapping (SLAM) and cooperative navigation of mobile sensor networks. The problem of SLAM is stated as follows: starting from an initial position, a mobile robot travels through a sequence of positions and obtains a set of sensor measurements at each position. The goal is for the mobile robot to process the sensor data to compute an estimate of its position while concurrently building a map of the environment. We will present SLAM results for several scenarios including land robot mapping of large-scale environments and undersea mapping using optical imaging sensors. We will also describe work on cooperative navigation for networks of autonomous underwater vehicles (AUVs) and autonomous sea-surface vehicles (ASVs). In the second part of the talk, we will provide an overview of MIT's entry in the 2007 DARPA Urban Challenge. The goal of this effort was to produce a car that can drive autonomously in traffic. Our team developed a novel strategy for using a large number of inexpensive sensors, mounted on the vehicle periphery, and calibrated with a new cross-modal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real-time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, composed of an innovative combination of well-proven algorithms for mission planning, situational planning, situational interpretation, and trajectory control. The performance of our system in the NQE and race events will be reviewed, and ideas for future research will be discussed. For more information, see http://grandchallenge.mit.edu Joint work with Seth Teller, Michael Bosse, Paul Newman, Ryan Eustice, Matthew Walter, Hanumant Singh, Henrik Schmidt, Mike Benjamin, Alexander Bahr, Joseph Curcio, Andrew Patrikalakis, Matt Antone, David Barrett, Mitch Berger, Ryan Buckley, Stefan Campbell, Alexander Epstein, Gaston Fiore, Luke Fletcher, Emilio Frazzoli, Robert Galejs, Jonathan How, Albert Huang, Karl Iagnemma, Troy Jones, Sertac Karaman, Olivier Koch, Siddhartha Krishnamurthy, Yoshi Kuwata, Keoni Maheloni, David Moore, Katy Moyer, Edwin Olson, Andrew Patrikalakis, Steve Peters, Stephen Proulx, Nicholas Roy, Daniela Rus, Chris Sanders, Seth Teller, Justin Teo, Robert Truax, Matthew Walter, and Jonathan Williams.</itunes:subtitle><itunes:summary>Speaker: Prof. J. Leonard Abstract: This talk will have two parts. In part one, we will review recent progress in mobile robotics, focusing on the problems of simultaneous localization and mapping (SLAM) and cooperative navigation of mobile sensor networks. The problem of SLAM is stated as follows: starting from an initial position, a mobile robot travels through a sequence of positions and obtains a set of sensor measurements at each position. The goal is for the mobile robot to process the sensor data to compute an estimate of its position while concurrently building a map of the environment. We will present SLAM results for several scenarios including land robot mapping of large-scale environments and undersea mapping using optical imaging sensors. We will also describe work on cooperative navigation for networks of autonomous underwater vehicles (AUVs) and autonomous sea-surface vehicles (ASVs). In the second part of the talk, we will provide an overview of MIT's entry in the 2007 DARPA Urban Challenge. The goal of this effort was to produce a car that can drive autonomously in traffic. Our team developed a novel strategy for using a large number of inexpensive sensors, mounted on the vehicle periphery, and calibrated with a new cross-modal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real-time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, composed of an innovative combination of well-proven algorithms for mission planning, situational planning, situational interpretation, and trajectory control. The performance of our system in the NQE and race events will be reviewed, and ideas for future research will be discussed. For more information, see http://grandchallenge.mit.edu Joint work with Seth Teller, Michael Bosse, Paul Newman, Ryan Eustice, Matthew Walter, Hanumant Singh, Henrik Schmidt, Mike Benjamin, Alexander Bahr, Joseph Curcio, Andrew Patrikalakis, Matt Antone, David Barrett, Mitch Berger, Ryan Buckley, Stefan Campbell, Alexander Epstein, Gaston Fiore, Luke Fletcher, Emilio Frazzoli, Robert Galejs, Jonathan How, Albert Huang, Karl Iagnemma, Troy Jones, Sertac Karaman, Olivier Koch, Siddhartha Krishnamurthy, Yoshi Kuwata, Keoni Maheloni, David Moore, Katy Moyer, Edwin Olson, Andrew Patrikalakis, Steve Peters, Stephen Proulx, Nicholas Roy, Daniela Rus, Chris Sanders, Seth Teller, Justin Teo, Robert Truax, Matthew Walter, and Jonathan Williams.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Humanoid Robot Soccer 101</title>
	<link>http://www.hamilton.ie/seminars/videos/31-t_roefer_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/31-t_roefer_hi.mp4</guid>
	<pubDate>Tue, 09 Aug 2011 00:00:31 +0100</pubDate>
	<description>Speaker:

Dr. T. Röfer


Abstract:

Building the software for a competitive robot soccer team is a challenging task. The robots have to perceive their environment, estimate where they and the other relevant objects are located on the field, decide what to do, and execute those decisions. All this has to happen in real time, on board the robots, with limited computing power, and not only for a single robot, but for the whole team. The lecture will give a survey of these tasks, using the methods employed by the team B-Human in the RoboCup Standard Platform League as an example.</description>
	<itunes:author>Dr. T. Röfer</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:18:19</itunes:duration>
	<enclosure length="1157269011" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/31-t_roefer_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. T. Röfer Abstract: Building the software for a competitive robot soccer team is a challenging task. The robots have to perceive their environment, estimate where they and the other relevant objects are located on the field, decide what to do, and execute those decisions. All this has to happen in real time, on board the robots, with limited computing power, and not only for a single robot, but for the whole team. The lecture will give a survey of these tasks, using the methods employed by the team B-Human in the RoboCup Standard Platform League as an example.</itunes:subtitle><itunes:summary>Speaker: Dr. T. Röfer Abstract: Building the software for a competitive robot soccer team is a challenging task. The robots have to perceive their environment, estimate where they and the other relevant objects are located on the field, decide what to do, and execute those decisions. All this has to happen in real time, on board the robots, with limited computing power, and not only for a single robot, but for the whole team. The lecture will give a survey of these tasks, using the methods employed by the team B-Human in the RoboCup Standard Platform League as an example.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>An Introduction to R</title>
	<link>http://www.hamilton.ie/seminars/videos/30-c_walz_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/30-c_walz_hi.mp4</guid>
	<pubDate>Fri, 03 Jun 2011 00:00:30 +0100</pubDate>
	<description>Speaker:

C. Walz


Abstract:

A first introduction to R.</description>
	<itunes:author>C. Walz</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>59:01</itunes:duration>
	<enclosure length="868223392" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/30-c_walz_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: C. Walz Abstract: A first introduction to R.</itunes:subtitle><itunes:summary>Speaker: C. Walz Abstract: A first introduction to R.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Lifecycle of HIV-infected cells</title>
	<link>http://www.hamilton.ie/seminars/videos/29-j_petravic_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/29-j_petravic_hi.mp4</guid>
	<pubDate>Sat, 05 Mar 2011 00:00:29 +0000</pubDate>
	<description>Speaker:

Dr. J. Petravic


Abstract:

In HIV dynamics models, it is commonly assumed that HIV-infected cells all have the same viral production and death rates. We explored the dynamics of viral production and death in vitro to determine the validity of this assumption. We infected human cells with HIV-1 constructs that expressed enhanced green fluorescent protein (EGFP) and determined the amount of viral proteins produced by infected cells. Analysis of the flow cytometry data showed that the productively infected cells exhibited a broad, approximately log-normal distribution of viral protein content (spanning several orders of magnitude) that changed its shape and mean fluorescence intensity over time, and that the population death rate apparently did not correlate with its mean EGFP content.&#13;&#13;We assumed that the observed EGFP fluorescence level represented the balance of protein production and degradation. In our model of the infected cell population, the EGFP fluorescence distribution at any time depended on the probability distributions of four independent parameters: the time to the start of protein production, the protein production and degradation rates, and the lifespan of infected cells. After exploring possible combinations of parameter distributions, we found that a distribution of protein production rates that is negatively correlated with the times to the start of viral protein production can explain the observed time course of the distribution of EGFP intensity.</description>
	<itunes:author>Dr. J. Petravic</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>54:43</itunes:duration>
	<enclosure length="819217616" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/29-j_petravic_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. J. Petravic Abstract: In HIV dynamics models, it is commonly assumed that HIV-infected cells all have the same viral production and death rates. We explored the dynamics of viral production and death in vitro to determine the validity of this assumption. We infected human cells with HIV-1 constructs that expressed enhanced green fluorescent protein (EGFP) and determined the amount of viral proteins produced by infected cells. Analysis of the flow cytometry data showed that the productively infected cells exhibited a broad, approximately log-normal distribution of viral protein content (spanning several orders of magnitude) that changed its shape and mean fluorescence intensity over time, and that the population death rate apparently did not correlate with its mean EGFP content. We assumed that the observed EGFP fluorescence level represented the balance of protein production and degradation. In our model of the infected cell population, the EGFP fluorescence distribution at any time depended on the probability distributions of four independent parameters: the time to the start of protein production, the protein production and degradation rates, and the lifespan of infected cells. After exploring possible combinations of parameter distributions, we found that a distribution of protein production rates that is negatively correlated with the times to the start of viral protein production can explain the observed time course of the distribution of EGFP intensity.</itunes:subtitle><itunes:summary>Speaker: Dr. J. Petravic Abstract: In HIV dynamics models, it is commonly assumed that HIV-infected cells all have the same viral production and death rates. We explored the dynamics of viral production and death in vitro to determine the validity of this assumption. We infected human cells with HIV-1 constructs that expressed enhanced green fluorescent protein (EGFP) and determined the amount of viral proteins produced by infected cells. Analysis of the flow cytometry data showed that the productively infected cells exhibited a broad, approximately log-normal distribution of viral protein content (spanning several orders of magnitude) that changed its shape and mean fluorescence intensity over time, and that the population death rate apparently did not correlate with its mean EGFP content. We assumed that the observed EGFP fluorescence level represented the balance of protein production and degradation. In our model of the infected cell population, the EGFP fluorescence distribution at any time depended on the probability distributions of four independent parameters: the time to the start of protein production, the protein production and degradation rates, and the lifespan of infected cells. After exploring possible combinations of parameter distributions, we found that a distribution of protein production rates that is negatively correlated with the times to the start of viral protein production can explain the observed time course of the distribution of EGFP intensity.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Advances in non-linear distortion methods of synthesis and processing of musical signals</title>
	<link>http://www.hamilton.ie/seminars/videos/28-v_lazzarini_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/28-v_lazzarini_hi.mp4</guid>
	<pubDate>Wed, 23 Mar 2011 00:00:28 +0000</pubDate>
	<description>Speaker:

Dr. V. Lazzarini


Abstract:

Non-linear distortion methods form a set of elegant and computationally economic methods of synthesis and processing for musical applications.  Among these, we find the famous Frequency Modulation synthesis, as developed by Chowning and made popular by Yamaha.  In addition, various other techniques, including Discrete Summation Formulae, Waveshaping and Phase distortion, can be cast in the same group (and often be given alternative interpretations) of non-linear distortion methods.  Research in the area has been very limited since the mid nineties, until a recent series of developments spurred new interest in these ideas.  In this talk, I will first introduce briefly the principles of non-linear distortion, providing an overview of the area.  I will then follow this with a tour of recent work, which will include adaptive methods, virtual analogue models and analysis-synthesis applications.</description>
	<itunes:author>Dr. V. Lazzarini</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:06:27</itunes:duration>
	<enclosure length="1006487364" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/28-v_lazzarini_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. V. Lazzarini Abstract: Non-linear distortion methods form a set of elegant and computationally economic methods of synthesis and processing for musical applications. Among these, we find the famous Frequency Modulation synthesis, as developed by Chowning and made popular by Yamaha. In addition, various other techniques, including Discrete Summation Formulae, Waveshaping and Phase distortion, can be cast in the same group (and often be given alternative interpretations) of non-linear distortion methods. Research in the area has been very limited since the mid nineties, until a recent series of developments spurred new interest in these ideas. In this talk, I will first introduce briefly the principles of non-linear distortion, providing an overview of the area. I will then follow this with a tour of recent work, which will include adaptive methods, virtual analogue models and analysis-synthesis applications.</itunes:subtitle><itunes:summary>Speaker: Dr. V. Lazzarini Abstract: Non-linear distortion methods form a set of elegant and computationally economic methods of synthesis and processing for musical applications. Among these, we find the famous Frequency Modulation synthesis, as developed by Chowning and made popular by Yamaha. In addition, various other techniques, including Discrete Summation Formulae, Waveshaping and Phase distortion, can be cast in the same group (and often be given alternative interpretations) of non-linear distortion methods. Research in the area has been very limited since the mid nineties, until a recent series of developments spurred new interest in these ideas. In this talk, I will first introduce briefly the principles of non-linear distortion, providing an overview of the area. 
I will then follow this with a tour of recent work, which will include adaptive methods, virtual analogue models and analysis-synthesis applications.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Programming stem cells: modeling stem cell dynamics and organ development</title>
	<link>http://www.hamilton.ie/seminars/videos/27-y_setty_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/27-y_setty_hi.mp4</guid>
	<pubDate>Wed, 23 Feb 2011 00:00:27 +0000</pubDate>
	<description>Speaker:

Dr. Y. Setty


Abstract:

In recent years, we have used software engineering tools to develop reactive models to simulate and analyze the development of organs. The modeled systems embody highly complex and dynamic processes, by which a set of precursor stem cells proliferate, differentiate and move to form a functioning tissue. Three organs from evolutionarily diverse organisms have been modeled in this way: the mouse pancreas, the C. elegans gonad, and partial rodent brain development. Analysis and execution of the models provided a dynamic representation of the development, anticipated known experimental results and proposed novel testable predictions. In my talk, I will discuss challenges, goals and achievements in this direction of science.</description>
	<itunes:author>Dr. Y. Setty</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>40:32</itunes:duration>
	<enclosure length="722930977" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/27-y_setty_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. Y. Setty Abstract: In recent years, we have used software engineering tools to develop reactive models to simulate and analyze the development of organs. The modeled systems embody highly complex and dynamic processes, by which a set of precursor stem cells proliferate, differentiate and move to form a functioning tissue. Three organs from evolutionarily diverse organisms have been modeled in this way: the mouse pancreas, the C. elegans gonad, and partial rodent brain development. Analysis and execution of the models provided a dynamic representation of the development, anticipated known experimental results and proposed novel testable predictions. In my talk, I will discuss challenges, goals and achievements in this direction of science.</itunes:subtitle><itunes:summary>Speaker: Dr. Y. Setty Abstract: In recent years, we have used software engineering tools to develop reactive models to simulate and analyze the development of organs. The modeled systems embody highly complex and dynamic processes, by which a set of precursor stem cells proliferate, differentiate and move to form a functioning tissue. Three organs from evolutionarily diverse organisms have been modeled in this way: the mouse pancreas, the C. elegans gonad, and partial rodent brain development. Analysis and execution of the models provided a dynamic representation of the development, anticipated known experimental results and proposed novel testable predictions. In my talk, I will discuss challenges, goals and achievements in this direction of science.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Vehicle-2-x Communication</title>
	<link>http://www.hamilton.ie/seminars/videos/26-i_radusch_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/26-i_radusch_hi.mp4</guid>
	<pubDate>Fri, 18 Feb 2011 00:00:26 +0000</pubDate>
	<description>Speaker:

Dr. I. Radusch


Abstract:

Future drivers and vehicles will benefit from upcoming integrated communication devices three-fold.  Communication will increase safety and efficiency in traffic as well as making driving more enjoyable.  Upcoming field operational tests will assess if available standards and implementations are ready for wide scale deployment.  Additionally, simulation environments such as VSimRTI allow comprehensive pre-validation of novel vehicle functions utilizing vehicle-2-x communication.</description>
	<itunes:author>Dr. I. Radusch</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:09:21</itunes:duration>
	<enclosure length="1046980046" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/26-i_radusch_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. I. Radusch Abstract: Future drivers and vehicles will benefit from upcoming integrated communication devices three-fold. Communication will increase safety and efficiency in traffic as well as making driving more enjoyable. Upcoming field operational tests will assess if available standards and implementations are ready for wide scale deployment. Additionally, simulation environments such as VSimRTI allow comprehensive pre-validation of novel vehicle functions utilizing vehicle-2-x communication.</itunes:subtitle><itunes:summary>Speaker: Dr. I. Radusch Abstract: Future drivers and vehicles will benefit from upcoming integrated communication devices three-fold. Communication will increase safety and efficiency in traffic as well as making driving more enjoyable. Upcoming field operational tests will assess if available standards and implementations are ready for wide scale deployment. Additionally, simulation environments such as VSimRTI allow comprehensive pre-validation of novel vehicle functions utilizing vehicle-2-x communication.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Event-Driven Automation in Laser-Scanning Microscopy Applied to Live Cell Imaging</title>
	<link>http://www.hamilton.ie/seminars/videos/25-j_wenus_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/25-j_wenus_hi.mp4</guid>
	<pubDate>Wed, 15 Dec 2010 00:00:25 +0000</pubDate>
	<description>Speaker:

Dr. J. Wenus


Abstract:

Microscopy of living cells is heavily employed in biomedicine to understand the mechanisms of disease progression and to develop novel pharmaceuticals. In particular, confocal microscopy, which relies on laser-based excitation of fluorescent cellular biomarkers, is frequently used for understanding the molecular actions of therapeutic drugs on abnormal cells. However, prolonged exposure to highly energetic laser radiation often leads to light-induced cell death before any spontaneous effects can occur --- an effect known as 'photo-toxicity'. To address this problem we have developed an automated live-cell imaging system, 'ALISSA', which employs online image processing and analysis to automatically detect biological events and then trigger appropriate changes in the image acquisition settings. In this way we minimize photo-toxicity, obtain higher-quality imaging data and minimize direct user involvement by introducing more automation into the whole experimental process. So far, ALISSA has been used in studies on cancer cells and neurons at the Royal College of Surgeons in Ireland, and it is currently being developed towards applications in commercial high-content screening systems.&#13;&#13;This is joint work between the RCSI, Dublin (H. Huber, H. Duessmann, J. Prehn) and the Hamilton Institute, NUI Maynooth (J. Wenus, P. Paul, D. Kalamatianos, P. Wellstead) with involvement from Siemens and Carl Zeiss MicroImaging.&#13;&#13;We gratefully acknowledge financial support from the National Biophotonics and Imaging Platform Ireland (HEA PRTLI Cycle 4).</description>
	<itunes:author>Dr. J. Wenus</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>38:29</itunes:duration>
	<enclosure length="591271403" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/25-j_wenus_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. J. Wenus Abstract: Microscopy of living cells is heavily employed in biomedicine to understand the mechanisms of disease progression and to develop novel pharmaceuticals. In particular, confocal microscopy, which relies on laser-based excitation of fluorescent cellular biomarkers, is frequently used for understanding the molecular actions of therapeutic drugs on abnormal cells. However, prolonged exposure to highly energetic laser radiation often leads to light-induced cell death before any spontaneous effects can occur --- an effect known as 'photo-toxicity'. To address this problem we have developed an automated live-cell imaging system, 'ALISSA', which employs online image processing and analysis to automatically detect biological events and then trigger appropriate changes in the image acquisition settings. This way we minimize the photo-toxicity, obtain higher-quality imaging data and minimize direct user involvement by introducing more automation to the whole experimental process. So far, ALISSA has been used in studies on cancer cells and neurons at the Royal College of Surgeons in Ireland, and it is currently under development aimed towards applications in commercial high content screening systems. This is joint work between the RCSI, Dublin (H. Huber, H. Duessmann, J. Prehn) and the Hamilton Institute, NUI Maynooth (J. Wenus, P. Paul, D. Kalamatianos, P. Wellstead) with involvement from Siemens and Carl Zeiss MicroImaging. We gratefully acknowledge financial support from the National Biophotonics and Imaging Platform Ireland (HEA PRTLI Cycle 4).</itunes:subtitle><itunes:summary>Speaker: Dr. J. Wenus Abstract: Microscopy of living cells is heavily employed in biomedicine to understand the mechanisms of disease progression and to develop novel pharmaceuticals. In particular, confocal microscopy, which relies on laser-based excitation of fluorescent cellular biomarkers, is frequently used for understanding the molecular actions of therapeutic drugs on abnormal cells. However, prolonged exposure to highly energetic laser radiation often leads to light-induced cell death before any spontaneous effects can occur --- an effect known as 'photo-toxicity'. To address this problem we have developed an automated live-cell imaging system, 'ALISSA', which employs online image processing and analysis to automatically detect biological events and then trigger appropriate changes in the image acquisition settings. This way we minimize the photo-toxicity, obtain higher-quality imaging data and minimize direct user involvement by introducing more automation to the whole experimental process. So far, ALISSA has been used in studies on cancer cells and neurons at the Royal College of Surgeons in Ireland, and it is currently under development aimed towards applications in commercial high content screening systems. This is joint work between the RCSI, Dublin (H. Huber, H. Duessmann, J. Prehn) and the Hamilton Institute, NUI Maynooth (J. Wenus, P. Paul, D. Kalamatianos, P. Wellstead) with involvement from Siemens and Carl Zeiss MicroImaging. We gratefully acknowledge financial support from the National Biophotonics and Imaging Platform Ireland (HEA PRTLI Cycle 4).</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Spectrum Sharing in Cognitive Radio with Quantized Channel Information</title>
	<link>http://www.hamilton.ie/seminars/videos/23-s_dey_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/23-s_dey_hi.mp4</guid>
	<pubDate>Thu, 15 Jul 2010 00:00:23 +0100</pubDate>
	<description>Speaker:

Dr. S. Dey


Abstract:

In this talk, we consider a wideband spectrum sharing system where a secondary user can share a number of orthogonal frequency bands, each licensed to a distinct primary user. We address the problem of optimal secondary transmit power allocation for ergodic capacity maximization, subject to an average sum (across the bands) transmit power constraint and individual average interference constraints on the primary users. The major contribution of our work lies in considering quantized channel state information (CSI) (for the vector channel space consisting of all secondary-to-secondary and secondary-to-primary channels) at the secondary transmitter, as opposed to the prevalent assumption of full CSI in most existing work. It is assumed that a band manager or a cognitive radio service provider has access to the full CSI from the secondary and primary receivers and designs (offline) an optimal power codebook based on the statistical information (channel distributions) of the channels, and feeds back the index of the codebook to the secondary transmitter for every channel realization in real time, via a delay-free noiseless limited feedback channel. A modified Generalized Lloyd-type algorithm (GLA) is designed for deriving the optimal power codebook, which is proved to be globally convergent and empirically consistent. An approximate quantized power allocation (AQPA) algorithm is presented that performs very close to its GLA-based counterpart for a large number of feedback bits and is significantly faster. We also present an extension of the modified GLA-based quantized power codebook design algorithm for the case when the feedback channel is noisy.
Numerical studies illustrate that with only 3-4 bits of feedback, the modified GLA-based algorithms provide secondary ergodic capacity very close to that achieved with full CSI, and with as little as 4 bits of feedback, AQPA provides comparable performance, making it an attractive choice for practical implementation. Various open problems and future research directions will also be discussed.</description>
	<itunes:author>Dr. S. Dey</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>59:12</itunes:duration>
	<enclosure length="363795000" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/23-s_dey_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. S. Dey Abstract: In this talk, we consider a wideband spectrum sharing system where a secondary user can share a number of orthogonal frequency bands, each licensed to a distinct primary user. We address the problem of optimal secondary transmit power allocation for ergodic capacity maximization, subject to an average sum (across the bands) transmit power constraint and individual average interference constraints on the primary users. The major contribution of our work lies in considering quantized channel state information (CSI) (for the vector channel space consisting of all secondary-to-secondary and secondary-to-primary channels) at the secondary transmitter, as opposed to the prevalent assumption of full CSI in most existing work. It is assumed that a band manager or a cognitive radio service provider has access to the full CSI from the secondary and primary receivers and designs (offline) an optimal power codebook based on the statistical information (channel distributions) of the channels, and feeds back the index of the codebook to the secondary transmitter for every channel realization in real time, via a delay-free noiseless limited feedback channel. A modified Generalized Lloyd-type algorithm (GLA) is designed for deriving the optimal power codebook, which is proved to be globally convergent and empirically consistent. An approximate quantized power allocation (AQPA) algorithm is presented that performs very close to its GLA-based counterpart for a large number of feedback bits and is significantly faster. We also present an extension of the modified GLA-based quantized power codebook design algorithm for the case when the feedback channel is noisy. Numerical studies illustrate that with only 3-4 bits of feedback, the modified GLA-based algorithms provide secondary ergodic capacity very close to that achieved with full CSI, and with as little as 4 bits of feedback, AQPA provides comparable performance, making it an attractive choice for practical implementation. Various open problems and future research directions will also be discussed.</itunes:subtitle><itunes:summary>Speaker: Dr. S. Dey Abstract: In this talk, we consider a wideband spectrum sharing system where a secondary user can share a number of orthogonal frequency bands, each licensed to a distinct primary user. We address the problem of optimal secondary transmit power allocation for ergodic capacity maximization, subject to an average sum (across the bands) transmit power constraint and individual average interference constraints on the primary users. The major contribution of our work lies in considering quantized channel state information (CSI) (for the vector channel space consisting of all secondary-to-secondary and secondary-to-primary channels) at the secondary transmitter, as opposed to the prevalent assumption of full CSI in most existing work. It is assumed that a band manager or a cognitive radio service provider has access to the full CSI from the secondary and primary receivers and designs (offline) an optimal power codebook based on the statistical information (channel distributions) of the channels, and feeds back the index of the codebook to the secondary transmitter for every channel realization in real time, via a delay-free noiseless limited feedback channel. A modified Generalized Lloyd-type algorithm (GLA) is designed for deriving the optimal power codebook, which is proved to be globally convergent and empirically consistent. An approximate quantized power allocation (AQPA) algorithm is presented that performs very close to its GLA-based counterpart for a large number of feedback bits and is significantly faster. We also present an extension of the modified GLA-based quantized power codebook design algorithm for the case when the feedback channel is noisy. Numerical studies illustrate that with only 3-4 bits of feedback, the modified GLA-based algorithms provide secondary ergodic capacity very close to that achieved with full CSI, and with as little as 4 bits of feedback, AQPA provides comparable performance, making it an attractive choice for practical implementation. Various open problems and future research directions will also be discussed.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Large deviation theory and its applications in statistical mechanics</title>
	<link>http://www.hamilton.ie/seminars/videos/22-h_touchette_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/22-h_touchette_hi.mp4</guid>
	<pubDate>Wed, 24 Mar 2010 00:00:22 +0000</pubDate>
	<description>Speaker:

Dr. H. Touchette


Abstract:

The theory of large deviations, initiated by Cramér in the 1930s and later developed by Donsker and Varadhan in the 1970s, is an active field of probability theory that finds applications in many subjects, including statistics, finance, actuarial mathematics, engineering, and physics.  Its use in physics dates back to the work of Ruelle, Lanford, and the late John Lewis, among others, who used concepts of large deviations in the 1970s and 1980s to study equilibrium systems and to put statistical mechanics on a rigorous footing.&#13;&#13;I will give in this talk a survey of these applications, as well as more recent ones related to long-range equilibrium systems and nonequilibrium systems, at a level which assumes little knowledge of statistical mechanics or large deviations.  As we cover these applications, we will see that large deviation theory and statistical mechanics share a common mathematical structure, which Lewis was well aware of, and which can be summarized by saying that an entropy function is to a physicist what a large deviation function (or rate function) is to a mathematician.  Other connections of this sort will be discussed.</description>
	<itunes:author>Dr. H. Touchette</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>54:21</itunes:duration>
	<enclosure length="318978542" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/22-h_touchette_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. H. Touchette Abstract: The theory of large deviations, initiated by Cramér in the 1930s and later developed by Donsker and Varadhan in the 1970s, is an active field of probability theory that finds applications in many subjects, including statistics, finance, actuarial mathematics, engineering, and physics. Its use in physics dates back to the work of Ruelle, Lanford, and the late John Lewis, among others, who used concepts of large deviations in the 1970s and 1980s to study equilibrium systems and to put statistical mechanics on a rigorous footing. I will give in this talk a survey of these applications, as well as more recent ones related to long-range equilibrium systems and nonequilibrium systems, at a level which assumes little knowledge of statistical mechanics or large deviations. As we cover these applications, we will see that large deviation theory and statistical mechanics share a common mathematical structure, which Lewis was well aware of, and which can be summarized by saying that an entropy function is to a physicist what a large deviation function (or rate function) is to a mathematician. Other connections of this sort will be discussed.</itunes:subtitle><itunes:summary>Speaker: Dr. H. Touchette Abstract: The theory of large deviations, initiated by Cramér in the 1930s and later developed by Donsker and Varadhan in the 1970s, is an active field of probability theory that finds applications in many subjects, including statistics, finance, actuarial mathematics, engineering, and physics. Its use in physics dates back to the work of Ruelle, Lanford, and the late John Lewis, among others, who used concepts of large deviations in the 1970s and 1980s to study equilibrium systems and to put statistical mechanics on a rigorous footing. I will give in this talk a survey of these applications, as well as more recent ones related to long-range equilibrium systems and nonequilibrium systems, at a level which assumes little knowledge of statistical mechanics or large deviations. As we cover these applications, we will see that large deviation theory and statistical mechanics share a common mathematical structure, which Lewis was well aware of, and which can be summarized by saying that an entropy function is to a physicist what a large deviation function (or rate function) is to a mathematician. Other connections of this sort will be discussed.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Asymptotic Stability Region of Slotted Aloha</title>
	<link>http://www.hamilton.ie/seminars/videos/21-c_bordenave_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/21-c_bordenave_hi.mp4</guid>
	<pubDate>Wed, 03 Mar 2010 00:00:21 +0000</pubDate>
	<description>Speaker:

Dr. C. Bordenave


Abstract:

Consider N queues with non-homogeneous packet arrivals.  The queues share a common communication channel.  At the beginning of each timeslot, if queue i has a packet, it attempts to access the channel with probability p_i.  This attempt is successful when no other queue attempts to access the channel.  For arbitrary N, the stability region of such a queueing system is a long-standing open problem.  However, as the number of queues N goes to infinity, it is possible to compute the asymptotic stability region.&#13;&#13;This is joint work with David McDonald (Ottawa) and Alexandre Proutiere (Microsoft).</description>
	<itunes:author>Dr. C. Bordenave</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>56:21</itunes:duration>
	<enclosure length="333092631" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/21-c_bordenave_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. C. Bordenave Abstract: Consider N queues with non-homogeneous packet arrivals. The queues share a common communication channel. At the beginning of each timeslot, if queue i has a packet, it attempts to access the channel with probability p_i. This attempt is successful when no other queue attempts to access the channel. For arbitrary N, the stability region of such a queueing system is a long-standing open problem. However, as the number of queues N goes to infinity, it is possible to compute the asymptotic stability region. This is joint work with David McDonald (Ottawa) and Alexandre Proutiere (Microsoft).</itunes:subtitle><itunes:summary>Speaker: Dr. C. Bordenave Abstract: Consider N queues with non-homogeneous packet arrivals. The queues share a common communication channel. At the beginning of each timeslot, if queue i has a packet, it attempts to access the channel with probability p_i. This attempt is successful when no other queue attempts to access the channel. For arbitrary N, the stability region of such a queueing system is a long-standing open problem. However, as the number of queues N goes to infinity, it is possible to compute the asymptotic stability region. This is joint work with David McDonald (Ottawa) and Alexandre Proutiere (Microsoft).</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>On the stabilization of discrete-time positive switched systems by means of Lyapunov based switching strategies</title>
	<link>http://www.hamilton.ie/seminars/videos/20-e_valcher_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/20-e_valcher_hi.mp4</guid>
	<pubDate>Fri, 19 Feb 2010 00:00:20 +0000</pubDate>
	<description>Speaker:

Prof. M. E. Valcher


Abstract:

Positive switched systems typically arise to cope with two distinct modeling needs. On the one hand, switching among different models mathematically formalizes the fact that the system laws change under different operating conditions. On the other hand, the variables to be modeled may be quantities that have no meaning unless positive (temperatures, pressures, population levels, ...).&#13;&#13;In this talk we consider the class of discrete-time positive switched systems, described, at each time t, by the first-order difference equation:&#13;&#13;x(t+1) = A_{\sigma(t)} x(t),&#13;&#13;where \sigma is a switching sequence taking values in the finite set {1,2}, and for each index i, A_i is an n x n positive matrix. Assuming that neither A_1 nor A_2 is a Schur matrix, we focus on the stabilizability of the system, namely on the possibility of finding switching strategies that drive to zero the state evolution corresponding to every positive initial state x(0). To this end, we resort to state feedback switching laws, whose value at time t depends on the value of some Lyapunov function in x(t).&#13;&#13;We first explore quadratic positive definite functions, by extending a technique described by De Carlo et al. Later, by taking advantage of the system positivity, we show that other classes of Lyapunov functions, such as linear copositive and quadratic copositive ones, may be used to design state-dependent stabilizing switching laws, and some of them may be designed under weaker conditions on the pair of matrices (A_1,A_2) with respect to those required for quadratic stabilizability.&#13;&#13;Some comparisons between the performances of the switching strategies are given.</description>
	<itunes:author>Prof. M. E. Valcher</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>42:59</itunes:duration>
	<enclosure length="251455922" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/20-e_valcher_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. M. E. Valcher Abstract: Positive switched systems typically arise to cope with two distinct modeling needs. On the one hand, switching among different models mathematically formalizes the fact that the system laws change under different operating conditions. On the other hand, the variables to be modeled may be quantities that have no meaning unless positive (temperatures, pressures, population levels, ...). In this talk we consider the class of discrete-time positive switched systems, described, at each time t, by the first-order difference equation: x(t+1) = A_{\sigma(t)} x(t), where \sigma is a switching sequence taking values in the finite set {1,2}, and for each index i, A_i is an n x n positive matrix. Assuming that neither A_1 nor A_2 is a Schur matrix, we focus on the stabilizability of the system, namely on the possibility of finding switching strategies that drive to zero the state evolution corresponding to every positive initial state x(0). To this end, we resort to state feedback switching laws, whose value at time t depends on the value of some Lyapunov function in x(t). We first explore quadratic positive definite functions, by extending a technique described by De Carlo et al. Later, by taking advantage of the system positivity, we show that other classes of Lyapunov functions, such as linear copositive and quadratic copositive ones, may be used to design state-dependent stabilizing switching laws, and some of them may be designed under weaker conditions on the pair of matrices (A_1,A_2) with respect to those required for quadratic stabilizability. Some comparisons between the performances of the switching strategies are given.</itunes:subtitle><itunes:summary>Speaker: Prof. M. E. Valcher Abstract: Positive switched systems typically arise to cope with two distinct modeling needs. On the one hand, switching among different models mathematically formalizes the fact that the system laws change under different operating conditions. On the other hand, the variables to be modeled may be quantities that have no meaning unless positive (temperatures, pressures, population levels, ...). In this talk we consider the class of discrete-time positive switched systems, described, at each time t, by the first-order difference equation: x(t+1) = A_{\sigma(t)} x(t), where \sigma is a switching sequence taking values in the finite set {1,2}, and for each index i, A_i is an n x n positive matrix. Assuming that neither A_1 nor A_2 is a Schur matrix, we focus on the stabilizability of the system, namely on the possibility of finding switching strategies that drive to zero the state evolution corresponding to every positive initial state x(0). To this end, we resort to state feedback switching laws, whose value at time t depends on the value of some Lyapunov function in x(t). We first explore quadratic positive definite functions, by extending a technique described by De Carlo et al. Later, by taking advantage of the system positivity, we show that other classes of Lyapunov functions, such as linear copositive and quadratic copositive ones, may be used to design state-dependent stabilizing switching laws, and some of them may be designed under weaker conditions on the pair of matrices (A_1,A_2) with respect to those required for quadratic stabilizability. Some comparisons between the performances of the switching strategies are given.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>A Phylogenetic Hidden Markov Model for Immune Epitope Discovery</title>
	<link>http://www.hamilton.ie/seminars/videos/19-c_seoighe_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/19-c_seoighe_hi.mp4</guid>
	<pubDate>Wed, 09 Dec 2009 00:00:19 +0000</pubDate>
	<description>Speaker:

Prof. C. Seoighe


Abstract:

We describe a phylogenetic model of protein-coding sequence evolution that includes environmental variables. We apply it to a set of viral sequences from individuals with known human leukocyte antigen (HLA) genotype and include parameters to model selective pressures affecting mutations within immunogenic (epitope) regions that facilitate viral evasion of immune responses. We combine this evolutionary model with a hidden Markov model to identify regions of the HIV-1 genome that evolve under immune pressure in the presence of specific HLA class I alleles and may therefore represent potential T cell epitopes. This phylogenetic hidden Markov model (phylo-HMM) provides a probabilistic framework that can be combined with sequence or structural information to enhance epitope prediction.</description>
	<itunes:author>Prof. C. Seoighe</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:11:37</itunes:duration>
	<enclosure length="426472930" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/19-c_seoighe_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. C. Seoighe Abstract: We describe a phylogenetic model of protein-coding sequence evolution that includes environmental variables. We apply it to a set of viral sequences from individuals with known human leukocyte antigen (HLA) genotype and include parameters to model selective pressures affecting mutations within immunogenic (epitope) regions that facilitate viral evasion of immune responses. We combine this evolutionary model with a hidden Markov model to identify regions of the HIV-1 genome that evolve under immune pressure in the presence of specific HLA class I alleles and may therefore represent potential T cell epitopes. This phylogenetic hidden Markov model (phylo-HMM) provides a probabilistic framework that can be combined with sequence or structural information to enhance epitope prediction.</itunes:subtitle><itunes:summary>Speaker: Prof. C. Seoighe Abstract: We describe a phylogenetic model of protein-coding sequence evolution that includes environmental variables. We apply it to a set of viral sequences from individuals with known human leukocyte antigen (HLA) genotype and include parameters to model selective pressures affecting mutations within immunogenic (epitope) regions that facilitate viral evasion of immune responses. We combine this evolutionary model with a hidden Markov model to identify regions of the HIV-1 genome that evolve under immune pressure in the presence of specific HLA class I alleles and may therefore represent potential T cell epitopes. This phylogenetic hidden Markov model (phylo-HMM) provides a probabilistic framework that can be combined with sequence or structural information to enhance epitope prediction.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Stochastic Modelling of T Cell Repertoire Diversity</title>
	<link>http://www.hamilton.ie/seminars/videos/18-c_molina-paris_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/18-c_molina-paris_hi.mp4</guid>
	<pubDate>Wed, 18 Nov 2009 00:00:18 +0000</pubDate>
	<description>Speaker:

Dr. C. Molina-París


Abstract:

T cells are specialised white blood cells that protect the body from infection and are also able to kill infected cells. T cells are characterised by the presence of a special receptor on their cell surface called the T cell receptor (TCR). The specificity of the T cell, namely which pathogens it can recognise, is determined by the molecular structure of its TCR. T cells can be classified according to their TCRs. All T cells that have identical TCRs are said to belong to the same clonotype. There are two types of T cells: naive and memory. Naive T cells have not yet encountered pathogens and memory T cells have already encountered pathogens. In this talk, I will only consider the class of naive T cells. A diverse naive T cell pool is essential to protect against novel infections, as the immune system cannot predict which pathogens the organism will be exposed to during its life-time. A healthy adult human possesses approximately 10^(11) naive T cells, which belong to about 10^7-10^8 different clonotypes. The reliability of the immune response to pathogenic challenge depends critically on the size (how many cells) and diversity (how many different TCRs or clonotypes) of the naive T cell pool of the individual. Experimental evidence suggests that interactions between TCRs and self-peptides (self-peptide = a fragment of a household protein) displayed on the surface of specialised cells, called antigen presenting cells (APCs), are important in controlling naive T cell numbers. Naive T cells undergo one round of cell division after receiving a survival stimulus from these specialized APCs. Whether or not a particular naive T cell can receive a survival signal from a specialized APC depends both on the TCR it expresses and the array of self-peptides displayed on the surface of the APC. 
Competition amongst naive T cells for these interactions regulates the diversity of the naive T cell pool.&#13;&#13;We have made use of a probabilistic (stochastic) model to describe this competition. In particular, we have modeled the time evolution of the number of T cells belonging to a particular clonotype. Our results indicate that competition maximizes TCR diversity by promoting the survival of T cell clonotypes that are most different from each other in terms of the self-peptides they are able to recognise.</description>
	<itunes:author>Dr. C. Molina-París</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>54:15</itunes:duration>
	<enclosure length="323607478" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/18-c_molina-paris_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. C. Molina-París Abstract: T cells are specialised white blood cells that protect the body from infection and are also able to kill infected cells. T cells are characterised by the presence of a special receptor on their cell surface called T cell receptor (TCR). The specificity of the T cell, namely which pathogens it can recognise, is determined by the molecular structure of its TCR. T cells can be classified according to their TCRs. All T cells that have identical TCRs are said to belong to the same clonotype. There are two types of T cells: naive and memory. Naive T cells have not yet encountered pathogens and memory T cells have already encountered pathogen. In this talk, I will only consider the class of naive T cells. A diverse naive T cell pool is essential to protect against novel infections, as the immune system cannot predict which pathogens the organism will be exposed to during its life-time. A healthy adult human possesses approximately 10^(11) naive T cells, which belong to about 10^7-10^8 different clonotypes. The reliability of the immune response to pathogenic challenge depends critically on the size (how many cells) and diversity (how many different TCRs or clonotypes) of the naive T cell pool of the individual. Experimental evidence suggests that interactions between TCRs with self-peptides (self-peptide = a fragment of a household protein) displayed on the surface of specialised cells, called antigen presenting cells (APCs), are important in controlling naive T cell numbers. Naive T cells undergo one round of cell division after receiving a survival stimulus from these specialized APCs. Whether or not a particular naive T cell can receive a survival signal from an specialized APC depends both on the TCR it expresses and the array of self-peptides displayed on the surface of the APC. 
Competition amongst naive T cells for these interactions regulates the diversity of the naive T cell pool. We have made use of a probabilistic (stochastic) model to describe this competition. In particular, we have modeled the time evolution of the number of T cells belonging to a particular clonotype. Our results indicate that competition maximizes TCR diversity by promoting the survival of T cell clonotypes that are most different from each other in terms of the self-peptides they are able to recognise.</itunes:subtitle><itunes:summary>Speaker: Dr. C. Molina-París Abstract: T cells are specialised white blood cells that protect the body from infection and are also able to kill infected cells. T cells are characterised by the presence of a special receptor on their cell surface called T cell receptor (TCR). The specificity of the T cell, namely which pathogens it can recognise, is determined by the molecular structure of its TCR. T cells can be classified according to their TCRs. All T cells that have identical TCRs are said to belong to the same clonotype. There are two types of T cells: naive and memory. Naive T cells have not yet encountered pathogens and memory T cells have already encountered pathogen. In this talk, I will only consider the class of naive T cells. A diverse naive T cell pool is essential to protect against novel infections, as the immune system cannot predict which pathogens the organism will be exposed to during its life-time. A healthy adult human possesses approximately 10^(11) naive T cells, which belong to about 10^7-10^8 different clonotypes. The reliability of the immune response to pathogenic challenge depends critically on the size (how many cells) and diversity (how many different TCRs or clonotypes) of the naive T cell pool of the individual. 
Experimental evidence suggests that interactions between TCRs and self-peptides (self-peptide = a fragment of a housekeeping protein) displayed on the surface of specialised cells, called antigen presenting cells (APCs), are important in controlling naive T cell numbers. Naive T cells undergo one round of cell division after receiving a survival stimulus from these specialised APCs. Whether or not a particular naive T cell can receive a survival signal from a specialised APC depends both on the TCR it expresses and the array of self-peptides displayed on the surface of the APC. Competition amongst naive T cells for these interactions regulates the diversity of the naive T cell pool. We have made use of a probabilistic (stochastic) model to describe this competition. In particular, we have modelled the time evolution of the number of T cells belonging to a particular clonotype. Our results indicate that competition maximises TCR diversity by promoting the survival of T cell clonotypes that are most different from each other in terms of the self-peptides they are able to recognise.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>The Brain is an Embedding Machine</title>
	<link>http://www.hamilton.ie/seminars/videos/17-r_clement_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/17-r_clement_hi.mp4</guid>
	<pubDate>Wed, 30 Sep 2009 00:00:17 +0100</pubDate>
	<description>Speaker:

Dr. R. Clement


Abstract:

Neural responses are often generated by the physical movement of an object or a limb.  Each such set of responses corresponds to a point on a smooth geometrical surface. To be able to manipulate such a representation the brain assigns coordinates to every point on the surface --- a procedure known as embedding. &#13;&#13;In the first part of this talk the properties of the early visual system are exploited to produce a model of coordinate space based on features such as colour, orientation and movement.  The feature model has the advantage over the geometric model that it is not restricted to two- or three-dimensional pictorial representations. &#13;&#13;The neural mechanism is highly suited to embedding.  In the second part of the talk the feature-based coordinate space will be used to explore the neural embedding of the sensory stimuli encountered in binocular vision and in the movement of the eye. &#13;&#13;In the final part of the talk the limitations on our ability to see objects arising from the neural embedding procedures will be outlined and in particular, what can be "seen" of the shape of surfaces embedded in more than three dimensions.</description>
	<itunes:author>Dr. R. Clement</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>40:30</itunes:duration>
	<enclosure length="239464661" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/17-r_clement_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. R. Clement Abstract: Neural responses are often generated by the physical movement of an object or a limb. Each such set of responses corresponds to a point on a smooth geometrical surface. To be able to manipulate such a representation the brain assigns coordinates to every point on the surface --- a procedure known as embedding. In the first part of this talk the properties of the early visual system are exploited to produce a model of coordinate space based on features such as colour, orientation and movement. The feature model has the advantage over the geometric model that it is not restricted to two- or three-dimensional pictorial representations. The neural mechanism is highly suited to embedding. In the second part of the talk the feature-based coordinate space will be used to explore the neural embedding of the sensory stimuli encountered in binocular vision and in the movement of the eye. In the final part of the talk the limitations on our ability to see objects arising from the neural embedding procedures will be outlined and in particular, what can be "seen" of the shape of surfaces embedded in more than three dimensions.</itunes:subtitle><itunes:summary>Speaker: Dr. R. Clement Abstract: Neural responses are often generated by the physical movement of an object or a limb. Each such set of responses corresponds to a point on a smooth geometrical surface. To be able to manipulate such a representation the brain assigns coordinates to every point on the surface --- a procedure known as embedding. In the first part of this talk the properties of the early visual system are exploited to produce a model of coordinate space based on features such as colour, orientation and movement. The feature model has the advantage over the geometric model that it is not restricted to two- or three-dimensional pictorial representations. The neural mechanism is highly suited to embedding. 
In the second part of the talk the feature-based coordinate space will be used to explore the neural embedding of the sensory stimuli encountered in binocular vision and in the movement of the eye. In the final part of the talk the limitations on our ability to see objects arising from the neural embedding procedures will be outlined and in particular, what can be "seen" of the shape of surfaces embedded in more than three dimensions.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>From idea to product: Best practices for improving the impact of product development in large organisations</title>
	<link>http://www.hamilton.ie/seminars/videos/16-n_pettit_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/16-n_pettit_hi.mp4</guid>
	<pubDate>Thu, 17 Sep 2009 00:00:16 +0100</pubDate>
	<description>Speaker:

Dr. N. Pettit


Abstract:

As part of a wider improvement initiative across all parts of our value chain, Danfoss, in 2007, launched an initiative to significantly improve its product development processes.  The goal was to make radical improvements on the dimensions of value to customer, time to profit, unit cost and quality. In order to do this, we looked around to identify industry-wide accepted best practices to build on.  When starting a similar program in production 4 years earlier, there were clear accepted practices that had proved themselves in multiple companies and industry sectors. These are centred on the manufacturing philosophy of Toyota and generally grouped under the term "lean production". These would often be merged with another set of practices termed "six sigma", which came out of Motorola and was championed by GE. &#13;&#13;In product development we found a different picture. Although many schools of thought have been adopted by industries, often trying to build on the back of lean production ideas (termed unsurprisingly "Lean product development"), these were found to be relatively immature in their application and narrow in what dimensions they improved when applied. Many proponents backed different tools and methods out of these schools as the "best" best practice, but none appeared to have a track record of significant impact on the multiple dimensions we needed to justify their claims. &#13;&#13;We undertook a significant exercise to look at the internal processes we wanted to improve. We then separated the tools and methods from the different schools of thought to identify which tools and methods were relevant to our processes and had a track record of success along at least one dimension. This led us to identify an underlying empirical set of principles that really seemed to drive true impact along all the dimensions we were looking for. 
Once we had these, we were able to go back and pick and choose a variety of tools and methods from the different schools of thought that embodied one or more of these principles --- stealing with pride. This gave us a set of tools that, when used together, would create the impact we were looking for. Finally, we created a system to adapt, improve and test these tools and methods before spreading them out, so that our people engaged in product development find them relevant, workable, and able to quickly deliver visible and significant improvement to their product development.&#13;&#13;The talk will outline some of these principles and methods we have built up in this journey.</description>
	<itunes:author>Dr. N. Pettit</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:13:25</itunes:duration>
	<enclosure length="430314665" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/16-n_pettit_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. N. Pettit Abstract: As part of a wider improvement initiative across all parts of our value chain, Danfoss, in 2007, launched an initiative to significantly improve its product development processes. The goal was to make radical improvements on the dimensions of value to customer, time to profit, unit cost and quality. In order to do this, we looked around to identify industry-wide accepted best practices to build on. When starting a similar program in production 4 years earlier, there were clear accepted practices that had proved themselves in multiple companies and industry sectors. These are centred on the manufacturing philosophy of Toyota and generally grouped under the term "lean production". These would often be merged with another set of practices termed "six sigma", which came out of Motorola and was championed by GE. In product development we found a different picture. Although many schools of thought have been adopted by industries, often trying to build on the back of lean production ideas (termed unsurprisingly "Lean product development"), these were found to be relatively immature in their application and narrow in what dimensions they improved when applied. Many proponents backed different tools and methods out of these schools as the "best" best practice, but none appeared to have a track record of significant impact on the multiple dimensions we needed to justify their claims. We undertook a significant exercise to look at the internal processes we wanted to improve. We then separated the tools and methods from the different schools of thought to identify which tools and methods were relevant to our processes and had a track record of success along at least one dimension. This led us to identify an underlying empirical set of principles that really seemed to drive true impact along all the dimensions we were looking for. 
Once we had these, we were able to go back and pick and choose a variety of tools and methods from the different schools of thought that embodied one or more of these principles --- stealing with pride. This gave us a set of tools that, when used together, would create the impact we were looking for. Finally, we created a system to adapt, improve and test these tools and methods before spreading them out, so that our people engaged in product development find them relevant, workable, and able to quickly deliver visible and significant improvement to their product development. The talk will outline some of these principles and methods we have built up in this journey.</itunes:subtitle><itunes:summary>Speaker: Dr. N. Pettit Abstract: As part of a wider improvement initiative across all parts of our value chain, Danfoss, in 2007, launched an initiative to significantly improve its product development processes. The goal was to make radical improvements on the dimensions of value to customer, time to profit, unit cost and quality. In order to do this, we looked around to identify industry-wide accepted best practices to build on. When starting a similar program in production 4 years earlier, there were clear accepted practices that had proved themselves in multiple companies and industry sectors. These are centred on the manufacturing philosophy of Toyota and generally grouped under the term "lean production". These would often be merged with another set of practices termed "six sigma", which came out of Motorola and was championed by GE. In product development we found a different picture. Although many schools of thought have been adopted by industries, often trying to build on the back of lean production ideas (termed unsurprisingly "Lean product development"), these were found to be relatively immature in their application and narrow in what dimensions they improved when applied. 
Many proponents backed different tools and methods out of these schools as the "best" best practice, but none appeared to have a track record of significant impact on the multiple dimensions we needed to justify their claims. We undertook a significant exercise to look at the internal processes we wanted to improve. We then separated the tools and methods from the different schools of thought to identify which tools and methods were relevant to our processes and had a track record of success along at least one dimension. This led us to identify an underlying empirical set of principles that really seemed to drive true impact along all the dimensions we were looking for. Once we had these, we were able to go back and pick and choose a variety of tools and methods from the different schools of thought that embodied one or more of these principles --- stealing with pride. This gave us a set of tools that, when used together, would create the impact we were looking for. Finally, we created a system to adapt, improve and test these tools and methods before spreading them out, so that our people engaged in product development find them relevant, workable, and able to quickly deliver visible and significant improvement to their product development. The talk will outline some of these principles and methods we have built up in this journey.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>On the Design of Doubly-Generalized Low-Density Parity-Check Codes</title>
	<link>http://www.hamilton.ie/seminars/videos/15-m_flanagan_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/15-m_flanagan_hi.mp4</guid>
	<pubDate>Wed, 26 Aug 2009 00:00:15 +0100</pubDate>
	<description>Speaker:

Dr. M. Flanagan


Abstract:

Doubly-generalized low-density parity-check (D-GLDPC) codes offer an attractive compromise between algebraic and random code design philosophies.  In this talk we introduce the concept of D-GLDPC codes, and then provide a solution for the asymptotic growth rate of the weight distribution of any D-GLDPC ensemble.  This tool is then used for detailed analysis of a case study, namely, a rate-1/2 D-GLDPC ensemble where all the check nodes are (7,4) Hamming codes and all the variable nodes are length-7 single parity-check codes.  It is illustrated how the variable node representations can heavily affect the code properties and how different variable node representations can be combined within the same graph to enhance some of the code parameters.  The analysis is conducted over the binary erasure channel.  Interesting features of the new codes include the capability of achieving a good compromise between waterfall and error floor performance while preserving graphical regularity, and values of threshold outperforming LDPC counterparts.</description>
	<itunes:author>Dr. M. Flanagan</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>52:55</itunes:duration>
	<enclosure length="307242431" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/15-m_flanagan_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. M. Flanagan Abstract: Doubly-generalized low-density parity-check (D-GLDPC) codes offer an attractive compromise between algebraic and random code design philosophies. In this talk we introduce the concept of D-GLDPC codes, and then provide a solution for the asymptotic growth rate of the weight distribution of any D-GLDPC ensemble. This tool is then used for detailed analysis of a case study, namely, a rate-1/2 D-GLDPC ensemble where all the check nodes are (7,4) Hamming codes and all the variable nodes are length-7 single parity-check codes. It is illustrated how the variable node representations can heavily affect the code properties and how different variable node representations can be combined within the same graph to enhance some of the code parameters. The analysis is conducted over the binary erasure channel. Interesting features of the new codes include the capability of achieving a good compromise between waterfall and error floor performance while preserving graphical regularity, and values of threshold outperforming LDPC counterparts.</itunes:subtitle><itunes:summary>Speaker: Dr. M. Flanagan Abstract: Doubly-generalized low-density parity-check (D-GLDPC) codes offer an attractive compromise between algebraic and random code design philosophies. In this talk we introduce the concept of D-GLDPC codes, and then provide a solution for the asymptotic growth rate of the weight distribution of any D-GLDPC ensemble. This tool is then used for detailed analysis of a case study, namely, a rate-1/2 D-GLDPC ensemble where all the check nodes are (7,4) Hamming codes and all the variable nodes are length-7 single parity-check codes. It is illustrated how the variable node representations can heavily affect the code properties and how different variable node representations can be combined within the same graph to enhance some of the code parameters. 
The analysis is conducted over the binary erasure channel. Interesting features of the new codes include the capability of achieving a good compromise between waterfall and error floor performance while preserving graphical regularity, and values of threshold outperforming LDPC counterparts.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Asymptotic Properties of Volterra Equations</title>
	<link>http://www.hamilton.ie/seminars/videos/14-e_velasco_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/14-e_velasco_hi.mp4</guid>
	<pubDate>Mon, 17 Aug 2009 00:00:14 +0100</pubDate>
	<description>Speaker:

Prof. E.C. Velasco


Abstract:

Volterra integral and difference equations may be used to model the dynamics of physical systems (viscoelasticity, motion of bodies with hereditary effects) and biological systems (population dynamics, biomechanics).  In this talk we discuss asymptotic properties of solutions of both Volterra integral and Volterra difference equations. For the Volterra difference equations, we derive stability conditions based on the direct Lyapunov method and present some examples to illustrate them.</description>
	<itunes:author>Prof. E.C. Velasco</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>53:57</itunes:duration>
	<enclosure length="335152373" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/14-e_velasco_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. E.C. Velasco Abstract: Volterra integral and difference equations may be used to model the dynamics of physical systems (viscoelasticity, motion of bodies with hereditary effects) and biological systems (population dynamics, biomechanics). In this talk we discuss asymptotic properties of solutions of both Volterra integral and Volterra difference equations. For the Volterra difference equations, we derive stability conditions based on the direct Lyapunov method and present some examples to illustrate them.</itunes:subtitle><itunes:summary>Speaker: Prof. E.C. Velasco Abstract: Volterra integral and difference equations may be used to model the dynamics of physical systems (viscoelasticity, motion of bodies with hereditary effects) and biological systems (population dynamics, biomechanics). In this talk we discuss asymptotic properties of solutions of both Volterra integral and Volterra difference equations. For the Volterra difference equations, we derive stability conditions based on the direct Lyapunov method and present some examples to illustrate them.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>On Fair Coexistence of Wireless Networks via CSMA Based Transmission Algorithms</title>
	<link>http://www.hamilton.ie/seminars/videos/13-m_alanyali_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/13-m_alanyali_hi.mp4</guid>
	<pubDate>Thu, 25 Jun 2009 00:00:13 +0100</pubDate>
	<description>Speaker:

Prof. M. Alanyali


Abstract:

This talk will touch on wireless coexistence issues that arise due to higher spatial density of spectrum usage.  We consider a fairness perspective for autonomous scheduling of transmissions by distinct sessions, subject to constraints that are represented by a conflict graph.  The emphasis is on randomized backoff-based CSMA algorithms.  The resulting transmission dynamics is represented by a Markovian model whose analysis suggests practical challenges in fair sharing of spectrum by distinct sessions that subscribe to a common standard, as well as by those that do not possess a common signaling protocol.</description>
	<itunes:author>Prof. M. Alanyali</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:06:36</itunes:duration>
	<enclosure length="424549813" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/13-m_alanyali_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. M. Alanyali Abstract: This talk will touch on wireless coexistence issues that arise due to higher spatial density of spectrum usage. We consider a fairness perspective for autonomous scheduling of transmissions by distinct sessions, subject to constraints that are represented by a conflict graph. The emphasis is on randomized backoff-based CSMA algorithms. The resulting transmission dynamics is represented by a Markovian model whose analysis suggests practical challenges in fair sharing of spectrum by distinct sessions that subscribe to a common standard, as well as by those that do not possess a common signaling protocol.</itunes:subtitle><itunes:summary>Speaker: Prof. M. Alanyali Abstract: This talk will touch on wireless coexistence issues that arise due to higher spatial density of spectrum usage. We consider a fairness perspective for autonomous scheduling of transmissions by distinct sessions, subject to constraints that are represented by a conflict graph. The emphasis is on randomized backoff-based CSMA algorithms. The resulting transmission dynamics is represented by a Markovian model whose analysis suggests practical challenges in fair sharing of spectrum by distinct sessions that subscribe to a common standard, as well as by those that do not possess a common signaling protocol.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>How to understand the cell by breaking it — computational inference of cellular networks from gene perturbation screens</title>
	<link>http://www.hamilton.ie/seminars/videos/11-f_markowetz_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/11-f_markowetz_hi.mp4</guid>
	<pubDate>Thu, 11 Jun 2009 00:00:11 +0100</pubDate>
	<description>Speaker:

Dr. F. Markowetz


Abstract:

Cellular mechanisms are driven by interactions between proteins, DNA and RNA, working together in cellular pathways. Current knowledge of information flow in the cell is still very incomplete and dissection of cellular pathways is one of the major challenges of systems biology. Computational approaches integrating heterogeneous genomic data sources into one joint model promise a comprehensive view on cellular processes. However, to be successful, computational methods need to account for the specific features of each data source. In this talk I will focus on data from gene perturbation experiments, where individual pathway members are experimentally silenced and effects of these perturbations are measured in genomic assays. I will describe Nested Effects Models, a probabilistic graphical model especially designed to reconstruct signaling pathways from gene perturbation data.</description>
	<itunes:author>Dr. F. Markowetz</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>48:31</itunes:duration>
	<enclosure length="286437720" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/11-f_markowetz_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. F. Markowetz Abstract: Cellular mechanisms are driven by interactions between proteins, DNA and RNA, working together in cellular pathways. Current knowledge of information flow in the cell is still very incomplete and dissection of cellular pathways is one of the major challenges of systems biology. Computational approaches integrating heterogeneous genomic data sources into one joint model promise a comprehensive view on cellular processes. However, to be successful, computational methods need to account for the specific features of each data source. In this talk I will focus on data from gene perturbation experiments, where individual pathway members are experimentally silenced and effects of these perturbations are measured in genomic assays. I will describe Nested Effects Models, a probabilistic graphical model especially designed to reconstruct signaling pathways from gene perturbation data.</itunes:subtitle><itunes:summary>Speaker: Dr. F. Markowetz Abstract: Cellular mechanisms are driven by interactions between proteins, DNA and RNA, working together in cellular pathways. Current knowledge of information flow in the cell is still very incomplete and dissection of cellular pathways is one of the major challenges of systems biology. Computational approaches integrating heterogeneous genomic data sources into one joint model promise a comprehensive view on cellular processes. However, to be successful, computational methods need to account for the specific features of each data source. In this talk I will focus on data from gene perturbation experiments, where individual pathway members are experimentally silenced and effects of these perturbations are measured in genomic assays. 
I will describe Nested Effects Models, a probabilistic graphical model especially designed to reconstruct signaling pathways from gene perturbation data.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Multivariate Time Series Analysis in Neurology</title>
	<link>http://www.hamilton.ie/seminars/videos/10-b_schelter_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/10-b_schelter_hi.mp4</guid>
	<pubDate>Wed, 06 May 2009 00:00:10 +0100</pubDate>
	<description>Speaker:

Dr. Björn Schelter


Abstract:

Nowadays, data are recorded with increasing spatial as well as temporal resolution. This calls for new methods to analyze these data sets. Owing to the high spatial as well as temporal resolution of the recorded signals, inference of the causal network structure underlying them becomes feasible. In many applications a detailed analysis of these networks allows deeper insights into the normal functioning or malfunctioning of the system. In neurology this helps to understand certain diseases like epilepsy or Parkinson's disease.&#13;&#13;Novel concepts to analyze multivariate data consisting of both time series and point processes will be presented. By means of an application to tremor in Parkinson's disease, the abilities and limitations of these techniques are discussed.</description>
	<itunes:author>Dr. Björn Schelter</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>55:25</itunes:duration>
	<enclosure length="341802011" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/10-b_schelter_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. Björn Schelter Abstract: Nowadays, data are recorded with increasing spatial as well as temporal resolution. This calls for new methods to analyze these data sets. Owing to the high spatial as well as temporal resolution of the recorded signals, inference of the causal network structure underlying them becomes feasible. In many applications a detailed analysis of these networks allows deeper insights into the normal functioning or malfunctioning of the system. In neurology this helps to understand certain diseases like epilepsy or Parkinson's disease. Novel concepts to analyze multivariate data consisting of both time series and point processes will be presented. By means of an application to tremor in Parkinson's disease, the abilities and limitations of these techniques are discussed.</itunes:subtitle><itunes:summary>Speaker: Dr. Björn Schelter Abstract: Nowadays, data are recorded with increasing spatial as well as temporal resolution. This calls for new methods to analyze these data sets. Owing to the high spatial as well as temporal resolution of the recorded signals, inference of the causal network structure underlying them becomes feasible. In many applications a detailed analysis of these networks allows deeper insights into the normal functioning or malfunctioning of the system. In neurology this helps to understand certain diseases like epilepsy or Parkinson's disease. Novel concepts to analyze multivariate data consisting of both time series and point processes will be presented. By means of an application to tremor in Parkinson's disease, the abilities and limitations of these techniques are discussed.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Probabilistic Interaction Networks</title>
	<link>http://www.hamilton.ie/seminars/videos/09-r_kulhavy_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/09-r_kulhavy_hi.mp4</guid>
	<pubDate>Wed, 29 Apr 2009 00:00:09 +0100</pubDate>
	<description>Speaker:

Dr. Rudolf Kulhavý


Abstract:

There is a common perception in today's business that the world around us is becoming less hierarchical and more networked and flat. While the shift towards a networked and decentralised business environment generally creates more freedom to act, it does not automatically increase the chances of success. Understanding the dynamics of networked systems — in particular the interplay between the performance of an individual node and of the entire network, and the importance of effective bonding for the well-being of an organisation — becomes a critical skill. Replacing mental models with a formal, quantitative model can improve such understanding and ultimately allow for systematic network optimisation. To this end, we propose to combine stochastic system dynamics modelling of individual nodes with probabilistic graphical modelling of a network configuration. The latter is closely related to theoretical constructs such as the Ising model in statistical mechanics or Markov random fields in image analysis. Modelling of value networks in business turns out to be even more complex because of the random structure of a network. In this talk, we discuss the economic substance and mathematical representation of node-to-node bonds, formulate a general Bayesian solution to the problem of estimating unknown state and parameter values in the resulting model, and discuss its Markov chain Monte Carlo implementation. To illustrate the concepts introduced, we revisit Clayton Christensen's qualitative model of the dynamic behaviour of new entrants versus incumbents when dealing with sustaining and disruptive innovation — and consider its reformulation as a probabilistic interaction network. We conclude by looking outside business for other instances of value networks.</description>
	<itunes:author>Dr. Rudolf Kulhavý</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:04:01</itunes:duration>
	<enclosure length="377032179" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/09-r_kulhavy_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. Rudolf Kulhavý Abstract: There is a common perception in today's business that the world around us is becoming less hierarchical and more networked and flat. While the shift towards a networked and decentralised business environment generally creates more freedom to act, it does not automatically increase the chances of success. Understanding the dynamics of networked systems — in particular the interplay between the performance of an individual node and of the entire network, and the importance of effective bonding for the well-being of an organisation — becomes a critical skill. Replacing mental models with a formal, quantitative model can improve such understanding and ultimately allow for systematic network optimisation. To this end, we propose to combine stochastic system dynamics modelling of individual nodes with probabilistic graphical modelling of a network configuration. The latter is closely related to theoretical constructs such as the Ising model in statistical mechanics or Markov random fields in image analysis. Modelling of value networks in business turns out to be even more complex because of the random structure of a network. In this talk, we discuss the economic substance and mathematical representation of node-to-node bonds, formulate a general Bayesian solution to the problem of estimating unknown state and parameter values in the resulting model, and discuss its Markov chain Monte Carlo implementation. To illustrate the concepts introduced, we revisit Clayton Christensen's qualitative model of the dynamic behaviour of new entrants versus incumbents when dealing with sustaining and disruptive innovation — and consider its reformulation as a probabilistic interaction network. We conclude by looking outside business for other instances of value networks.</itunes:subtitle><itunes:summary>Speaker: Dr. Rudolf Kulhavý Abstract: There is a common perception in today's business that the world around us is becoming less hierarchical and more networked and flat. While the shift towards a networked and decentralised business environment generally creates more freedom to act, it does not automatically increase the chances of success. Understanding the dynamics of networked systems — in particular the interplay between the performance of an individual node and of the entire network, and the importance of effective bonding for the well-being of an organisation — becomes a critical skill. Replacing mental models with a formal, quantitative model can improve such understanding and ultimately allow for systematic network optimisation. To this end, we propose to combine stochastic system dynamics modelling of individual nodes with probabilistic graphical modelling of a network configuration. The latter is closely related to theoretical constructs such as the Ising model in statistical mechanics or Markov random fields in image analysis. Modelling of value networks in business turns out to be even more complex because of the random structure of a network. In this talk, we discuss the economic substance and mathematical representation of node-to-node bonds, formulate a general Bayesian solution to the problem of estimating unknown state and parameter values in the resulting model, and discuss its Markov chain Monte Carlo implementation. To illustrate the concepts introduced, we revisit Clayton Christensen's qualitative model of the dynamic behaviour of new entrants versus incumbents when dealing with sustaining and disruptive innovation — and consider its reformulation as a probabilistic interaction network. We conclude by looking outside business for other instances of value networks.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Counting &amp; Sampling Contingency Tables</title>
	<link>http://www.hamilton.ie/seminars/videos/08-m_cryan_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/08-m_cryan_hi.mp4</guid>
	<pubDate>Wed, 22 Apr 2009 00:00:08 +0100</pubDate>
	<description>Speaker:

Dr. M. Cryan


Abstract:

Suppose we are given two lists r and c of positive integers, where r = (r[1], ..., r[m]) represents a list of prescribed row sums and c = (c[1], ..., c[n]) is a list of prescribed column sums. We require that (r[1] + ... + r[m]) = (c[1] + ... + c[n]). In this setting, we say that an m-by-n matrix X of non-negative integers is a Contingency Table (for these given row/column values) if X simultaneously satisfies all of the given row and column sums. The problem of determining whether at least one contingency table exists can be solved in polynomial time (in fact, this question is fairly trivial).&#13;&#13;In my talk, we are interested in the more difficult problem of sampling a table uniformly at random from the entire set of contingency tables. This problem has some applications in practical statistics, which I will mention. We study a very natural Markov chain on the set of contingency tables called the 2-by-2 heat bath: one step of this chain operates by selecting 2 rows and 2 columns uniformly at random, computing the induced row sums and column sums on this 2-by-2 window, then replacing the window with a table chosen randomly from all 2-by-2 tables with the induced row and column sums. This Markov chain converges to the uniform distribution on contingency tables - our goal is to show that it approaches uniformity within polynomial time. We are able to achieve this result for the case when the number of rows m is some fixed constant. Our proof is by application of the canonical paths method of Jerrum and Sinclair.&#13;&#13;(Joint work with Martin Dyer, Leslie Goldberg, Mark Jerrum and Russell Martin)</description>
	<itunes:author>Dr. M. Cryan</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:01:27</itunes:duration>
	<enclosure length="378752434" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/08-m_cryan_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. M. Cryan Abstract: Suppose we are given two lists r and c of positive integers, where r = (r[1], ..., r[m]) represents a list of prescribed row sums and c = (c[1], ..., c[n]) is a list of prescribed column sums. We require that (r[1] + ... + r[m]) = (c[1] + ... + c[n]). In this setting, we say that an m-by-n matrix X of non-negative integers is a Contingency Table (for these given row/column values) if X simultaneously satisfies all of the given row and column sums. The problem of determining whether at least one contingency table exists can be solved in polynomial time (in fact, this question is fairly trivial). In my talk, we are interested in the more difficult problem of sampling a table uniformly at random from the entire set of contingency tables. This problem has some applications in practical statistics, which I will mention. We study a very natural Markov chain on the set of contingency tables called the 2-by-2 heat bath: one step of this chain operates by selecting 2 rows and 2 columns uniformly at random, computing the induced row sums and column sums on this 2-by-2 window, then replacing the window with a table chosen randomly from all 2-by-2 tables with the induced row and column sums. This Markov chain converges to the uniform distribution on contingency tables - our goal is to show that it approaches uniformity within polynomial time. We are able to achieve this result for the case when the number of rows m is some fixed constant. Our proof is by application of the canonical paths method of Jerrum and Sinclair. (Joint work with Martin Dyer, Leslie Goldberg, Mark Jerrum and Russell Martin)</itunes:subtitle><itunes:summary>Speaker: Dr. M. Cryan Abstract: Suppose we are given two lists r and c of positive integers, where r = (r[1], ..., r[m]) represents a list of prescribed row sums and c = (c[1], ..., c[n]) is a list of prescribed column sums. We require that (r[1] + ... + r[m]) = (c[1] + ... + c[n]). In this setting, we say that an m-by-n matrix X of non-negative integers is a Contingency Table (for these given row/column values) if X simultaneously satisfies all of the given row and column sums. The problem of determining whether at least one contingency table exists can be solved in polynomial time (in fact, this question is fairly trivial). In my talk, we are interested in the more difficult problem of sampling a table uniformly at random from the entire set of contingency tables. This problem has some applications in practical statistics, which I will mention. We study a very natural Markov chain on the set of contingency tables called the 2-by-2 heat bath: one step of this chain operates by selecting 2 rows and 2 columns uniformly at random, computing the induced row sums and column sums on this 2-by-2 window, then replacing the window with a table chosen randomly from all 2-by-2 tables with the induced row and column sums. This Markov chain converges to the uniform distribution on contingency tables - our goal is to show that it approaches uniformity within polynomial time. We are able to achieve this result for the case when the number of rows m is some fixed constant. Our proof is by application of the canonical paths method of Jerrum and Sinclair. (Joint work with Martin Dyer, Leslie Goldberg, Mark Jerrum and Russell Martin)</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>ClubADSL: Enhancing Bandwidth Aggregation in your Neighborhood</title>
	<link>http://www.hamilton.ie/seminars/videos/07-d_giustiniano_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/07-d_giustiniano_hi.mp4</guid>
	<pubDate>Fri, 20 Feb 2009 00:00:07 +0000</pubDate>
	<description>Speaker:

Dr. D. Giustiniano


Abstract:

ADSL is becoming the standard form of residential and small-business broadband access to the Internet due, primarily, to its low deployment cost. These ADSL residential lines are often deployed with Access Points (APs) that provide wireless connectivity. While ADSL technology has shown clear limits in terms of capacity, short-range wireless communication can offer similar or higher capacity. More importantly, it is often possible for a residential wireless client to be in range of several other APs belonging to nearby neighbors with ADSL connections. Therefore, it is possible for a wireless client to simultaneously connect to several APs in range and effectively aggregate their available ADSL bandwidth. Recent work has shown promising results in this area, but important questions remain unresolved: i) how can we guarantee a fair distributed bandwidth allocation among clients? ii) how is the latency of TCP connections affected by AP connections over multiple frequencies? iii) how can we minimize the MAC cost of managing these multiple APs? To answer these questions, we introduce ClubADSL, a prototype wireless client that can aggregate the capacity of multi-frequency APs. ClubADSL achieves fairness through distributed pressure schemes and minimizes the impact of end-to-end latency on system performance with a resource allocation scheme based on Access-Point slot assignment. We show the feasibility of such a system in seamlessly transmitting TCP traffic, and validate its experimental implementation over commodity hardware in controlled scenarios. [Joint work with Alberto Lopez, Eduard Goma, Julian Morillo, Pablo Rodriguez.]</description>
	<itunes:author>Dr. D. Giustiniano</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>59:00</itunes:duration>
	<enclosure length="370407004" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/07-d_giustiniano_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. D. Giustiniano Abstract: ADSL is becoming the standard form of residential and small-business broadband access to the Internet due, primarily, to its low deployment cost. These ADSL residential lines are often deployed with Access Points (APs) that provide wireless connectivity. While ADSL technology has shown clear limits in terms of capacity, short-range wireless communication can offer similar or higher capacity. More importantly, it is often possible for a residential wireless client to be in range of several other APs belonging to nearby neighbors with ADSL connections. Therefore, it is possible for a wireless client to simultaneously connect to several APs in range and effectively aggregate their available ADSL bandwidth. Recent work has shown promising results in this area, but important questions remain unresolved: i) how can we guarantee a fair distributed bandwidth allocation among clients? ii) how is the latency of TCP connections affected by AP connections over multiple frequencies? iii) how can we minimize the MAC cost of managing these multiple APs? To answer these questions, we introduce ClubADSL, a prototype wireless client that can aggregate the capacity of multi-frequency APs. ClubADSL achieves fairness through distributed pressure schemes and minimizes the impact of end-to-end latency on system performance with a resource allocation scheme based on Access-Point slot assignment. We show the feasibility of such a system in seamlessly transmitting TCP traffic, and validate its experimental implementation over commodity hardware in controlled scenarios. [Joint work with Alberto Lopez, Eduard Goma, Julian Morillo, Pablo Rodriguez.]</itunes:subtitle><itunes:summary>Speaker: Dr. D. Giustiniano Abstract: ADSL is becoming the standard form of residential and small-business broadband access to the Internet due, primarily, to its low deployment cost. These ADSL residential lines are often deployed with Access Points (APs) that provide wireless connectivity. While ADSL technology has shown clear limits in terms of capacity, short-range wireless communication can offer similar or higher capacity. More importantly, it is often possible for a residential wireless client to be in range of several other APs belonging to nearby neighbors with ADSL connections. Therefore, it is possible for a wireless client to simultaneously connect to several APs in range and effectively aggregate their available ADSL bandwidth. Recent work has shown promising results in this area, but important questions remain unresolved: i) how can we guarantee a fair distributed bandwidth allocation among clients? ii) how is the latency of TCP connections affected by AP connections over multiple frequencies? iii) how can we minimize the MAC cost of managing these multiple APs? To answer these questions, we introduce ClubADSL, a prototype wireless client that can aggregate the capacity of multi-frequency APs. ClubADSL achieves fairness through distributed pressure schemes and minimizes the impact of end-to-end latency on system performance with a resource allocation scheme based on Access-Point slot assignment. We show the feasibility of such a system in seamlessly transmitting TCP traffic, and validate its experimental implementation over commodity hardware in controlled scenarios. [Joint work with Alberto Lopez, Eduard Goma, Julian Morillo, Pablo Rodriguez.]</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>How I broke AES (Advanced Encryption Standard) — if I did it</title>
	<link>http://www.hamilton.ie/seminars/videos/06-w_smith_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/06-w_smith_hi.mp4</guid>
	<pubDate>Mon, 02 Feb 2009 00:00:06 +0000</pubDate>
	<description>Speaker:

Dr. W. D. Smith


Abstract:

We describe a new, simple but more powerful form of linear cryptanalysis.  It appears to break AES (and undoubtedly other cryptosystems too, e.g. SKIPJACK).&#13;*But the break is "nonconstructive".&#13;*Even if this break is broken (due to the underlying models inadequately approximating the real world) we explain how AES could still contain "trapdoors" which would make cryptanalysis unexpectedly easy for anybody who knew the trapdoor.&#13;&#13;We then discuss how to use the theory of BLECCs to build cryptosystems provably&#13;*not containing trapdoors of this sort,&#13;*secure against our strengthened form of linear cryptanalysis,&#13;*secure against "differential" cryptanalysis,&#13;*secure against D.J. Bernstein's timing attack.&#13;&#13;Using this technique we prove a fundamental theorem: it is possible to thus encrypt N bits with security 2^(cN), via a circuit Q_N containing &lt;= cN two-input logic gates and operating in &lt;= c log(N) gate-delays, where Q_N is constructible in polynomial (in N) time.</description>
	<itunes:author>Dr. W. D. Smith</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:04:31</itunes:duration>
	<enclosure length="386746844" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/06-w_smith_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Dr. W. D. Smith Abstract: We describe a new, simple but more powerful form of linear cryptanalysis. It appears to break AES (and undoubtedly other cryptosystems too, e.g. SKIPJACK). *But the break is "nonconstructive". *Even if this break is broken (due to the underlying models inadequately approximating the real world) we explain how AES could still contain "trapdoors" which would make cryptanalysis unexpectedly easy for anybody who knew the trapdoor. We then discuss how to use the theory of BLECCs to build cryptosystems provably *not containing trapdoors of this sort, *secure against our strengthened form of linear cryptanalysis, *secure against "differential" cryptanalysis, *secure against D.J. Bernstein's timing attack. Using this technique we prove a fundamental theorem: it is possible to thus encrypt N bits with security 2^(cN), via a circuit Q_N containing &lt;= cN two-input logic gates and operating in &lt;= c log(N) gate-delays, where Q_N is constructible in polynomial (in N) time.</itunes:subtitle><itunes:summary>Speaker: Dr. W. D. Smith Abstract: We describe a new, simple but more powerful form of linear cryptanalysis. It appears to break AES (and undoubtedly other cryptosystems too, e.g. SKIPJACK). *But the break is "nonconstructive". *Even if this break is broken (due to the underlying models inadequately approximating the real world) we explain how AES could still contain "trapdoors" which would make cryptanalysis unexpectedly easy for anybody who knew the trapdoor. We then discuss how to use the theory of BLECCs to build cryptosystems provably *not containing trapdoors of this sort, *secure against our strengthened form of linear cryptanalysis, *secure against "differential" cryptanalysis, *secure against D.J. Bernstein's timing attack. Using this technique we prove a fundamental theorem: it is possible to thus encrypt N bits with security 2^(cN), via a circuit Q_N containing &lt;= cN two-input logic gates and operating in &lt;= c log(N) gate-delays, where Q_N is constructible in polynomial (in N) time.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Router Buffer Sizing Revisited: The Role of the Output/Input Capacity Ratio</title>
	<link>http://www.hamilton.ie/seminars/videos/05-c_dovrolis_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/05-c_dovrolis_hi.mp4</guid>
	<pubDate>Mon, 13 Oct 2008 00:00:05 +0100</pubDate>
	<description>Speaker:

Prof. C. Dovrolis


Abstract:

The issue of router buffer sizing is still open and significant.  Previous work either considers open-loop traffic or only analyzes persistent TCP flows.  Our work differs in two ways.  First, it considers the more realistic case of non-persistent TCP flows with heavy-tailed size distribution.  Second, instead of only looking at link metrics, we focus on the impact of buffer sizing on TCP performance.  Through a combination of test bed experiments, simulation, and analysis, we reach the following conclusions: the output/input capacity ratio at a network link largely determines the required buffer size.  If that ratio is larger than one, the loss rate drops exponentially with the buffer size and the optimal buffer size is close to zero.  Otherwise, if the output/input capacity ratio is lower than one, the loss rate follows a power-law reduction with the buffer size and significant buffering is needed, especially with flows that are mostly in congestion-avoidance.  Smaller transfers, which are mostly in slow-start, require significantly smaller buffers. We conclude by revisiting the ongoing debate on "small versus large" buffers from a new perspective.</description>
	<itunes:author>Prof. C. Dovrolis</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>56:33</itunes:duration>
	<enclosure length="338454986" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/05-c_dovrolis_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. C. Dovrolis Abstract: The issue of router buffer sizing is still open and significant. Previous work either considers open-loop traffic or only analyzes persistent TCP flows. Our work differs in two ways. First, it considers the more realistic case of non-persistent TCP flows with heavy-tailed size distribution. Second, instead of only looking at link metrics, we focus on the impact of buffer sizing on TCP performance. Through a combination of test bed experiments, simulation, and analysis, we reach the following conclusions: the output/input capacity ratio at a network link largely determines the required buffer size. If that ratio is larger than one, the loss rate drops exponentially with the buffer size and the optimal buffer size is close to zero. Otherwise, if the output/input capacity ratio is lower than one, the loss rate follows a power-law reduction with the buffer size and significant buffering is needed, especially with flows that are mostly in congestion-avoidance. Smaller transfers, which are mostly in slow-start, require significantly smaller buffers. We conclude by revisiting the ongoing debate on "small versus large" buffers from a new perspective.</itunes:subtitle><itunes:summary>Speaker: Prof. C. Dovrolis Abstract: The issue of router buffer sizing is still open and significant. Previous work either considers open-loop traffic or only analyzes persistent TCP flows. Our work differs in two ways. First, it considers the more realistic case of non-persistent TCP flows with heavy-tailed size distribution. Second, instead of only looking at link metrics, we focus on the impact of buffer sizing on TCP performance. Through a combination of test bed experiments, simulation, and analysis, we reach the following conclusions: the output/input capacity ratio at a network link largely determines the required buffer size. If that ratio is larger than one, the loss rate drops exponentially with the buffer size and the optimal buffer size is close to zero. Otherwise, if the output/input capacity ratio is lower than one, the loss rate follows a power-law reduction with the buffer size and significant buffering is needed, especially with flows that are mostly in congestion-avoidance. Smaller transfers, which are mostly in slow-start, require significantly smaller buffers. We conclude by revisiting the ongoing debate on "small versus large" buffers from a new perspective.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Patchy Solutions of Hamilton-Jacobi-Bellman Equations</title>
	<link>http://www.hamilton.ie/seminars/videos/03-a_krener_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/03-a_krener_hi.mp4</guid>
	<pubDate>Fri, 23 May 2008 00:00:03 +0100</pubDate>
	<description>Speaker:

Prof. A. E. Krener


Abstract:

The Hamilton-Jacobi-Bellman partial differential equation arises in the solution of optimal control problems.  It is a first-order, nonlinear, hyperbolic PDE that is very difficult to solve because of the curse of dimensionality.  Moreover, the solution may not exist in the classical sense, i.e., the solution may not be differentiable everywhere.  We describe an approach to approximately solve some of these equations on patches where the solution is smooth.</description>
	<itunes:author>Prof. A. E. Krener</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>56:26</itunes:duration>
	<enclosure length="379459693" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/03-a_krener_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. A. E. Krener Abstract: The Hamilton-Jacobi-Bellman partial differential equation arises in the solution of optimal control problems. It is a first-order, nonlinear, hyperbolic PDE that is very difficult to solve because of the curse of dimensionality. Moreover, the solution may not exist in the classical sense, i.e., the solution may not be differentiable everywhere. We describe an approach to approximately solve some of these equations on patches where the solution is smooth.</itunes:subtitle><itunes:summary>Speaker: Prof. A. E. Krener Abstract: The Hamilton-Jacobi-Bellman partial differential equation arises in the solution of optimal control problems. It is a first-order, nonlinear, hyperbolic PDE that is very difficult to solve because of the curse of dimensionality. Moreover, the solution may not exist in the classical sense, i.e., the solution may not be differentiable everywhere. We describe an approach to approximately solve some of these equations on patches where the solution is smooth.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Passivity-Based Stability Analysis and Applications to Biochemical Reaction Networks</title>
	<link>http://www.hamilton.ie/seminars/videos/02-m_arcak_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/02-m_arcak_hi.mp4</guid>
	<pubDate>Mon, 19 May 2008 00:00:02 +0100</pubDate>
	<description>Speaker:

Prof. M. Arcak


Abstract:

The passivity concept - an abstraction of energy conservation and dissipation in physical systems - has been instrumental in feedback control theory and led to breakthroughs in nonlinear and adaptive control design. In this talk we discuss the use of passivity as a stability test for classes of biochemical reaction networks. The main result determines global asymptotic stability of the network from the diagonal stability of a dissipativity matrix which incorporates information about the passivity properties of the subsystems, the interconnection structure of the network, and the signs of the feedback terms. This stability test encompasses the well-known 'secant criterion' for cyclic networks and extends it to general interconnection structures represented by graphs. An extension to reaction-diffusion PDEs is also discussed. The results are illustrated on MAPK cascade models and on branched interconnection structures motivated by metabolic networks.</description>
	<itunes:author>Prof. M. Arcak</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>48:53</itunes:duration>
	<enclosure length="324082349" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/02-m_arcak_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. M. Arcak Abstract: The passivity concept - an abstraction of energy conservation and dissipation in physical systems - has been instrumental in feedback control theory and led to breakthroughs in nonlinear and adaptive control design. In this talk we discuss the use of passivity as a stability test for classes of biochemical reaction networks. The main result determines global asymptotic stability of the network from the diagonal stability of a dissipativity matrix which incorporates information about the passivity properties of the subsystems, the interconnection structure of the network, and the signs of the feedback terms. This stability test encompasses the well-known 'secant criterion' for cyclic networks and extends it to general interconnection structures represented by graphs. An extension to reaction-diffusion PDEs is also discussed. The results are illustrated on MAPK cascade models and on branched interconnection structures motivated by metabolic networks.</itunes:subtitle><itunes:summary>Speaker: Prof. M. Arcak Abstract: The passivity concept - an abstraction of energy conservation and dissipation in physical systems - has been instrumental in feedback control theory and led to breakthroughs in nonlinear and adaptive control design. In this talk we discuss the use of passivity as a stability test for classes of biochemical reaction networks. The main result determines global asymptotic stability of the network from the diagonal stability of a dissipativity matrix which incorporates information about the passivity properties of the subsystems, the interconnection structure of the network, and the signs of the feedback terms. This stability test encompasses the well-known 'secant criterion' for cyclic networks and extends it to general interconnection structures represented by graphs. An extension to reaction-diffusion PDEs is also discussed. 
The results are illustrated on MAPK cascade models and on branched interconnection structures motivated by metabolic networks.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>


<item>
	<title>Input-to-State Stability of Differential Inclusions with Application to Hysteretic Feedback Systems</title>
	<link>http://www.hamilton.ie/seminars/videos/01-e_p_ryan_hi.mp4</link>
	<guid>http://www.hamilton.ie/seminars/videos/01-e_p_ryan_hi.mp4</guid>
	<pubDate>Thu, 15 May 2008 00:00:01 +0100</pubDate>
	<description>Speaker:

Prof. E. P. Ryan


Abstract:

Input-to-state stability is a concept that captures "nice" properties of dynamical systems with input (e.g. bounded input implies bounded state, input "eventually small" implies state "eventually small", input convergent to zero implies state convergent to zero).  Input-to-state stability (ISS) of a class of differential inclusions is described.  Every system in the class is of Lur'e type: a feedback interconnection of a linear system and a (set-valued) nonlinearity.  Applications of the ISS results, in the context of feedback interconnections with a hysteresis operator in the feedback path, are developed.</description>
	<itunes:author>Prof. E. P. Ryan</itunes:author>
	<itunes:explicit>no</itunes:explicit>
	<itunes:duration>1:02:04</itunes:duration>
	<enclosure length="412555045" type="video/m4v" url="http://www.hamilton.ie/seminars/videos/01-e_p_ryan_hi.mp4"/>
<author>florian@knorn.org (Hamilton Institute)</author><itunes:subtitle>Speaker: Prof. E. P. Ryan Abstract: Input-to-state stability is a concept that captures "nice" properties of dynamical systems with input (e.g. bounded input implies bounded state, input "eventually small" implies state "eventually small", input convergent to zero implies state convergent to zero). Input-to-state stability (ISS) of a class of differential inclusions is described. Every system in the class is of Lur'e type: a feedback interconnection of a linear system and a (set-valued) nonlinearity. Applications of the ISS results, in the context of feedback interconnections with a hysteresis operator in the feedback path, are developed.</itunes:subtitle><itunes:summary>Speaker: Prof. E. P. Ryan Abstract: Input-to-state stability is a concept that captures "nice" properties of dynamical systems with input (e.g. bounded input implies bounded state, input "eventually small" implies state "eventually small", input convergent to zero implies state convergent to zero). Input-to-state stability (ISS) of a class of differential inclusions is described. Every system in the class is of Lur'e type: a feedback interconnection of a linear system and a (set-valued) nonlinearity. Applications of the ISS results, in the context of feedback interconnections with a hysteresis operator in the feedback path, are developed.</itunes:summary><itunes:keywords>Seminars,Talks,Presentations,Hamilton,Institute</itunes:keywords></item>

</channel>
</rss>