Research Publications
Note: This list is in reverse chronological order by publication year. For CS/math papers, authors are listed in alphabetical order; for bioinformatics papers, authors are listed in rough order of contribution.

Identification of genome-wide non-canonical spliced regions and analysis of biological functions for spliced sequences using Read-Split-Fly
 Authors: Yongsheng Bai, Jeff Kinne, Lizhong Ding, Ethan Rath, Aaron Cox, Siva Naidu and Youping Deng.
 Accepted to appear in BMC Bioinformatics pending revisions.
 To be presented at: 2016 ICIBM.
 Slides (target audience: CS undergrads): google slides

Read-Split-Run: An improved bioinformatics pipeline for identification of genome-wide non-canonical spliced regions using RNA-Seq data
 Authors: Yongsheng Bai, Jeff Kinne, Brandon Donham, Feng Jiang, Lizhong Ding, Justin R. Hassler, and Randal J. Kaufman.
 In BMC Genomics 17(Suppl 7):503, 2016.
 Presented at the 2015 ICIBM.
 Full text online: here

Ordering with precedence constraints and budget minimization
 Authors: Jeff Kinne, Jan Manuch, Akbar Rafiey, Arash Rafiey.
 In submission, 20 pages, 2015.
 Full draft (arXiv),
Abstract,
Abstract: We introduce a variation of the scheduling with precedence constraints problem that has applications to molecular folding and production management. We are given a bipartite graph H=(B,S). Vertices in B are thought of as goods or services that must be bought to produce items in S that are to be sold. An edge from $j$ in S to $i$ in B indicates that the production of $j$ requires the purchase of $i$. Each vertex in B has a cost, and each vertex in S results in some gain. The goal is to obtain an ordering of $B \cup S$ that respects the precedence constraints and maximizes the minimal net profit encountered as the vertices are processed. We call this optimal value the budget or capital investment required for the bipartite graph, and refer to our problem as the bipartite graph ordering problem.
The problem is equivalent to a version of an NP-complete molecular folding problem that has been studied recently [14]. Work on the molecular folding problem has focused on heuristic algorithms and exponential-time exact algorithms for the unweighted problem, where costs are $1$ or $-1$ and the graphs are restricted to those arising from RNA folding.
The present work seeks exact algorithms for solving the bipartite graph ordering problem. We demonstrate an algorithm that computes the optimal ordering in time $O^*(2^n)$, where $n$ is the number of vertices in the input bipartite graph. Our main result is a general strategy that can be used to find an optimal ordering in polynomial time for bipartite graphs that satisfy certain properties. We apply the technique to a variety of graph classes, obtaining polynomial-time solutions to the bipartite graph ordering problem for bipartite graphs that are convex, trivially perfect, or co-bipartite, and for trees. One of our ultimate goals is to completely characterize the classes of graphs for which the problem can be solved exactly in polynomial time.
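The $O^*(2^n)$ exact algorithm mentioned above can be realized as a dynamic program over subsets: the best achievable minimum prefix profit for a set of processed vertices depends only on the set, not on the order within it. The sketch below illustrates that idea; it is not the paper's implementation, and the dictionary-based graph representation and function names are my own.

```python
from functools import lru_cache

def best_budget(costs, gains, requires):
    """Maximum over valid orderings of the minimum prefix net profit.

    costs:    dict mapping each vertex in B to its (positive) cost
    gains:    dict mapping each vertex in S to its gain
    requires: dict mapping each s in S to the set of B-vertices that
              must be bought before s can be sold
    """
    S = set(gains)
    weight = {b: -c for b, c in costs.items()}  # buying b costs money
    weight.update(gains)                        # selling s earns money
    # For each b, the S-vertices that must come after it.
    sold_after = {b: {s for s in S if b in requires[s]} for b in costs}

    @lru_cache(maxsize=None)
    def best(placed):
        # placed: frozenset of vertices already processed (some prefix).
        if not placed:
            return 0  # the empty prefix has net profit 0
        total = sum(weight[v] for v in placed)
        result = float("-inf")
        for v in placed:  # try each v as the last vertex of this prefix
            rest = placed - {v}
            if v in S and not requires[v] <= rest:
                continue  # v sold before all its required purchases
            if v not in S and sold_after[v] & rest:
                continue  # some item needing v sold before v was bought
            result = max(result, min(best(rest), total))
        return result

    return best(frozenset(costs) | frozenset(gains))
```

For instance, with B = {b1, b2} at costs 1 and 3, S = {s1, s2} with gains 2 and 4, s1 requiring b1, and s2 requiring b2, the order b1, s1, b2, s2 dips to a net of -2, which is optimal; the function returns -2, i.e., a budget of 2 is required.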

Lower Bounds against Weakly-Uniform Threshold Circuits.
 Authors: Ruiwen Chen, Valentine Kabanets, and Jeff Kinne.
 In the special issue of Algorithmica for invited papers from the 18th International Computing and Combinatorics Conference (COCOON), volume 70, issue 1, pages 47-75, 2014, doi:10.1007/s00453-013-9823-y.
 Full version of "On TC^{0} Lower Bounds for the Permanent" with some new proofs and results.

PDF of draft,
ECCC report,
Abstract,
Official
version on springer.com.
Abstract: A family of Boolean circuits {Cn}_{n>0} is called γ(n)-weakly-uniform if there is a polynomial-time algorithm for deciding the direct connection language of every Cn, given advice of size γ(n). This is a relaxation of the usual notion of uniformity, which allows one to interpolate between complete uniformity (when γ(n) = 0) and complete non-uniformity (when γ(n) > |Cn|). Weak uniformity is essentially equivalent to the notion of succinctness introduced by Jansen and Santhanam [JS11]. Our main result is that Permanent is not computable by polynomial-size n^{o(1)}-weakly-uniform TC^{0} circuits. This strengthens the results by Allender [All99] (for uniform TC^{0}) and by Jansen and Santhanam [JS11] (for weakly-uniform arithmetic circuits of constant depth). Our approach is quite general, and can be used to extend to the "weakly-uniform" setting all currently known circuit lower bounds proved for the "uniform" setting. For example, we show that Permanent is not computable by polynomial-size (log n)^{O(1)}-weakly-uniform threshold circuits of depth o(log log n), generalizing the result by Koiran and Perifel [KP09].

On TC^{0} Lower Bounds for the Permanent.
 Authors: Jeff Kinne.
 In Proceedings of the 18th International Computing and Combinatorics Conference (COCOON), LNCS 7434, pages 420-432, 2012.

PDF,
Abstract,
Conference Slides (PDF).
Abstract: In this paper we consider the problem of proving lower bounds for the permanent. An ongoing line of research has shown super-polynomial lower bounds for slightly-non-uniform small-depth threshold and arithmetic circuits [All99, KP09, JS11, JS12]. We prove a new parameterized lower bound that includes each of the previous results as subcases. Our main result implies that the permanent does not have Boolean threshold circuits of the following kinds.
 Depth O(1), polylog(n) bits of non-uniformity, and size s(n) such that for all constants c, s^{(c)}(n) < 2^{n}. The size s must satisfy another technical condition that is true of functions normally dealt with (such as compositions of polynomials, logarithms, and exponentials).
 Depth o(log log n), polylog(n) bits of non-uniformity, and size n^{O(1)}.
 Depth O(1), n^{o(1)} bits of non-uniformity, and size n^{O(1)}.

On Derandomization and Average-Case Complexity of Monotone Functions.
 Authors: George Karakostas, Jeff Kinne, Dieter van Melkebeek.
 In Theoretical Computer Science, volume 434, pages 35-44, 2012.

PDF,
Abstract.
Abstract: We investigate whether circuit lower bounds for monotone circuits can be used to derandomize randomized monotone circuits. We show that, in fact, any derandomization of randomized monotone computations would derandomize all randomized computations, whether monotone or not. We prove similar results in the settings of pseudorandom generators and average-case hard functions: a pseudorandom generator secure against monotone circuits is also secure, with somewhat weaker parameters, against general circuits, and an average-case hard function for monotone circuits is also hard, with somewhat weaker parameters, for general circuits.

Pseudorandom Generators, Typically-Correct Derandomization, and Circuit Lower Bounds.
 Authors: Jeff Kinne, Dieter van Melkebeek, and Ronen Shaltiel.
 In the special issue of Computational Complexity for selected papers from the 13th International Workshop on Randomization and Computation (RANDOM), volume 21, number 1, pages 3-61, 2012.
 Full version of "Pseudorandom Generators and Typically-Correct Derandomization" with some new proofs and results.

PDF,
ECCC report,
Abstract.
Abstract: The area of derandomization attempts to provide efficient deterministic simulations of randomized algorithms in various algorithmic settings. Goldreich and Wigderson introduced a notion of "typically-correct" deterministic simulations, which are allowed to err on few inputs. In this paper we further the study of typically-correct derandomization in two ways.
First, we develop a generic approach for constructing typically-correct derandomizations based on seed-extending pseudorandom generators, which are pseudorandom generators that reveal their seed. We use our approach to obtain both conditional and unconditional typically-correct derandomization results in various algorithmic settings. We show that our technique strictly generalizes an earlier approach by Shaltiel based on randomness extractors, and simplifies the proofs of some known results. We also demonstrate that our approach is applicable in algorithmic settings where earlier work did not apply. For example, we present a typically-correct polynomial-time simulation for every language in BPP based on a hardness assumption that is (seemingly) weaker than the ones used in earlier work.
Second, we investigate whether typically-correct derandomization of BPP implies circuit lower bounds. Extending the work of Kabanets and Impagliazzo for the zero-error case, we establish a positive answer for error rates in the range considered by Goldreich and Wigderson. In doing so, we provide a simpler proof of the zero-error result. Our proof scales better than the original one and does not rely on the result by Impagliazzo, Kabanets, and Wigderson that NEXP having polynomial-size circuits implies that NEXP coincides with EXP.
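A seed-extending generator is one whose output literally contains its seed as a prefix; the typically-correct simulation can then run the randomized algorithm on input x with "coins" drawn from the tail of G(x), using the input itself as the seed. The toy sketch below illustrates only this interface, not the paper's constructions: SHA-256 stands in for a genuine complexity-theoretic pseudorandom generator, and all names are my own.

```python
import hashlib

def seed_extending_gen(seed: bytes, extra: int) -> bytes:
    """Toy seed-extending generator: output = seed || pseudorandom tail.

    The seed is revealed verbatim as a prefix of the output. SHA-256 is
    only an illustrative stand-in for a real pseudorandom generator.
    """
    tail = b""
    counter = 0
    while len(tail) < extra:
        tail += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return seed + tail[:extra]

def typically_correct_sim(randomized_alg, x: bytes, coin_bytes: int):
    """Deterministic simulation: use the input itself as the seed and
    feed the algorithm the pseudorandom tail as its random coins."""
    out = seed_extending_gen(x, coin_bytes)
    coins = out[len(x):]  # everything past the revealed seed
    return randomized_alg(x, coins)
```

Because the generator reveals its seed, the simulation may commit to seed = x without losing information; intuitively, pseudorandomness on most seeds then translates into the deterministic simulation being correct on most inputs, which is the typically-correct guarantee.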

Deterministic Simulations and Hierarchy Theorems for Randomized Algorithms.
 Authors: Jeff Kinne.
 My Ph.D. dissertation from the Computer Sciences department at the University of Wisconsin-Madison, 2010.

PDF,
Slides from defense (PDF),
Abstract.
Abstract:
In this dissertation, we present three research directions related to the question of whether all randomized algorithms can be derandomized, i.e., simulated by deterministic algorithms with a small loss in efficiency.
Typically-Correct Derandomization. A recent line of research has considered "typically correct" deterministic simulations of randomized algorithms, which are allowed to err on few inputs. Such derandomizations may be easier to obtain and/or be more efficient than full derandomizations that do not make mistakes. We further the study of typically-correct derandomization in two ways.
First, we develop a generic approach for constructing typically-correct derandomizations based on seed-extending pseudorandom generators, which are pseudorandom generators that reveal their seed. We use our approach to obtain both conditional and unconditional typically-correct derandomization results in various algorithmic settings. For example, we present a typically-correct polynomial-time simulation for every language in BPP based on a hardness assumption that is weaker than the ones used in earlier work.
Second, we investigate whether typically-correct derandomization of BPP implies circuit lower bounds. We establish a positive answer for small error rates and, in doing so, provide a proof for the zero-error setting that is simpler and scales better than earlier arguments.
Monotone Computations. Short of derandomizing all efficient randomized algorithms, we can ask to derandomize more restricted classes of randomized algorithms. Because a strong connection has been proved between circuit lower bounds and derandomization, and there has been success proving worst-case circuit lower bounds for monotone circuits, randomized monotone computations are a natural candidate to consider. We show that, in fact, any derandomization of randomized monotone computations would derandomize all randomized algorithms, whether monotone or not. We prove similar results in the settings of pseudorandom generators and average-case hard functions: a pseudorandom generator secure against monotone circuits is also secure with somewhat weaker parameters against general circuits, and an average-case hard function for monotone circuits is also hard with somewhat weaker parameters for general circuits.
Hierarchy Theorems. For any computational model, a fundamental question is whether machines with more resources are strictly more powerful than machines with fewer resources. Such results are known as hierarchy theorems. The standard techniques for proving hierarchy theorems fail when applied to bounded-error randomized machines and other so-called "semantic" models of computation, for which a machine must satisfy some promise to be valid. If all randomized algorithms can be efficiently derandomized in a uniform way, hierarchies for bounded-error randomized algorithms would follow from the deterministic hierarchy theorems. But can hierarchies be proved short of proving derandomization?
A recent line of work has made progress by proving time hierarchies for randomized and other semantic models that use one bit of advice. We adapt the techniques to prove results in the setting of space, proving space hierarchy results that are as tight as possible for typical space bounds between logarithmic and linear for randomized and other semantic models that use one bit of advice.

Space Hierarchy Results for Randomized and Other Semantic Models.
 Authors: Jeff Kinne and Dieter van Melkebeek.
 In Computational Complexity, volume 19, number 3, pages 423-475, 2010.
 Full version of "Space Hierarchy Results for Randomized Models" with some new proofs and results.
 PDF, ECCC report, Abstract.
Abstract: We prove space hierarchy and separation results for randomized and other semantic models of computation with advice, where a machine is only required to behave appropriately when given the correct advice sequence. Previous work on hierarchy and separation theorems for such models focused on time as the resource. We obtain tighter results with space as the resource.
Our main theorems deal with space-bounded randomized machines that always halt, and read as follows. Let s(n) be any space-constructible monotone function that is Ω(log n) and let s'(n) be any function such that s'(n) = ω(s(n+as(n))) for all constants a.
 There exists a language computable by two-sided error randomized machines using s'(n) space and one bit of advice that is not computable by two-sided error machines using s(n) space and min(s(n),n) bits of advice.
 There exists a language computable by zero-sided error randomized machines in space s'(n) with one bit of advice that is not computable by one-sided error randomized machines using s(n) space and min(s(n),n) bits of advice.
We also obtain results that apply to generic semantic models of computation.
 Pseudorandom Generators and Typically-Correct Derandomization.
 Space Hierarchy Results for Randomized Models.