Research Publications
Note: This list is in reverse chronological order by publication year. For CS/math papers, authors are listed in alphabetical order; for bioinformatics papers, authors are listed in rough order of contribution.
-
MetaPlex: An Ion Torrent COI metabarcoding workflow and toolkit to increase experimental efficacy and efficiency
- Authors: Nick Gabry, Jeff Kinne, and Rusty Gonser
- Submitted, in review. Manuscript, MetaPlex tool.
-
Disparities in second-generation DNA metabarcoding results exposed with accessible and repeatable workflows
- Authors: Timothy J. Divoll, Veronica A. Brown, Jeff Kinne, Gary F. McCracken and Joy M. O'Keefe.
- Molecular Ecology Resources, 2018. doi 10.1111/1755-0998.12770. abstract, pipeline source
-
Identification of genome-wide non-canonical spliced regions and analysis of biological functions for spliced sequences using Read-Split-Fly
- Authors: Yongsheng Bai, Jeff Kinne, Lizhong Ding, Ethan Rath, Aaron Cox, Siva Naidu and Youping Deng.
- BMC Bioinformatics, 18(Suppl 11):382, 2017.
- Presented at: 2016 ICIBM.
- Slides (target audience - CS undergrads): google slides
-
Read-Split-Run: An improved bioinformatics pipeline for identification of genome-wide non-canonical spliced regions using RNA-Seq data
- Authors: Yongsheng Bai, Jeff Kinne, Brandon Donham, Feng Jiang, Lizhong Ding, Justin R. Hassler, and Randal J. Kaufman.
- In BMC Genomics 17(Suppl 7):503, 2016.
- Presented at the 2015 ICIBM.
- Full text online: here
-
Ordering with precedence constraints and budget minimization
- Authors: Jeff Kinne, Jan Manuch, Akbar Rafiey, Arash Rafiey.
- In submission, 20 pages, 2015.
- Full draft (arXiv), Abstract.
Abstract: We introduce a variation of the scheduling with precedence constraints problem that has applications to molecular folding and production management. We are given a bipartite graph H=(B,S). Vertices in B are thought of as goods or services that must be bought to produce items in S that are to be sold. An edge from $j$ in S to $i$ in B indicates that the production of $j$ requires the purchase of $i$. Each vertex in B has a cost, and each vertex in S results in some gain. The goal is to obtain an ordering of B ∪ S that respects the precedence constraints and maximizes the minimal net profit encountered as the vertices are processed. We call this optimal value the budget or capital investment required for the bipartite graph, and refer to our problem as the bipartite graph ordering problem.
The problem is equivalent to a version of an NP-complete molecular folding problem that has been studied recently [14]. Work on the molecular folding problem has focused on heuristic algorithms and exponential-time exact algorithms for the un-weighted problem where costs are 1 or -1 and when restricted to graphs arising from RNA folding.
The present work seeks exact algorithms for solving the bipartite graph ordering problem. We demonstrate an algorithm that computes the optimal ordering in time $O^*(2^n)$, where $n$ is the number of vertices in the input bipartite graph. Our main result is a general strategy that can be used to find an optimal ordering in polynomial time for bipartite graphs that satisfy certain properties. We apply the technique to a variety of graph classes, obtaining polynomial-time solutions to the bipartite graph ordering problem for convex bipartite graphs, trivially perfect graphs, co-bipartite graphs, and trees. One of our ultimate goals is to completely characterize the classes of graphs for which the problem can be solved exactly in polynomial time.
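To make the problem statement above concrete, here is a brute-force sketch in Python on a small hypothetical instance. This is illustrative only; it is not the paper's $O^*(2^n)$ algorithm or any of its polynomial-time special cases, and the instance data is made up.

```python
from itertools import permutations

# Hypothetical toy instance: buy vertices in B have costs, sell
# vertices in S have gains, and each sell vertex requires the prior
# purchase of a set of buy vertices (the edges from S to B).
cost = {"b1": 3, "b2": 2}
gain = {"s1": 4, "s2": 3}
requires = {"s1": {"b1"}, "s2": {"b1", "b2"}}

def best_budget():
    """Brute-force over all orderings of B ∪ S, keeping those that
    respect precedence, and return the best (largest) value of the
    minimal running net profit — the negated budget."""
    vertices = list(cost) + list(gain)
    best = None
    for order in permutations(vertices):
        bought, profit, low, ok = set(), 0, 0, True
        for v in order:
            if v in gain:
                if not requires[v] <= bought:  # precedence violated
                    ok = False
                    break
                profit += gain[v]
            else:
                bought.add(v)
                profit -= cost[v]
            low = min(low, profit)
        if ok and (best is None or low > best):
            best = low
    return best

print(best_budget())  # → -3 (a budget of 3 suffices: buy b1, sell s1, ...)
```

Brute force is factorial-time and only feasible for a handful of vertices, which is exactly why the paper's exact and polynomial-time algorithms matter.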
-
Lower Bounds against Weakly-Uniform Threshold Circuits.
- Authors: Ruiwen Chen, Valentine Kabanets, and Jeff Kinne.
- In special issue of Algorithmica for invited papers from the 18th International Computing and Combinatorics Conference (COCOON), Volume 17, issue 1, pages 47-75, 2014, doi:10.1007/s00453-013-9823-y.
- Full version of "On TC0 Lower Bounds for the Permanent" with some new proofs and results.
-
PDF of draft, ECCC report, Abstract, Official version on springer.com.
Abstract: A family of Boolean circuits {C_n}_{n>0} is called γ(n)-weakly-uniform if there is a polynomial-time algorithm for deciding the direct connection language of every C_n, given advice of size γ(n). This is a relaxation of the usual notion of uniformity, which allows one to interpolate between complete uniformity (when γ(n) = 0) and complete nonuniformity (when γ(n) > |C_n|). Weak uniformity is essentially equivalent to succinctness introduced by Jansen and Santhanam [JS11]. Our main result is that Permanent is not computable by polynomial-size n^{o(1)}-weakly-uniform TC^0 circuits. This strengthens the results by Allender [All99] (for uniform TC^0) and by Jansen and Santhanam [JS11] (for weakly-uniform arithmetic circuits of constant depth). Our approach is quite general, and can be used to extend to the "weakly-uniform" setting all currently known circuit lower bounds proved for the "uniform" setting. For example, we show that Permanent is not computable by polynomial-size (log n)^{O(1)}-weakly-uniform threshold circuits of depth o(log log n), generalizing the result by Koiran and Perifel [KP09].
-
On TC0 Lower Bounds for the Permanent.
- Authors: Jeff Kinne.
- In Proceedings of the 18th International Computing and Combinatorics Conference (COCOON), LNCS 7434, pages 420-432, 2012.
-
PDF,
Abstract,
Conference Slides (PDF).
Abstract: In this paper we consider the problem of proving lower bounds for the permanent. An ongoing line of research has shown super-polynomial lower bounds for slightly-non-uniform small-depth threshold and arithmetic circuits [All99, KP09, JS11, JS12]. We prove a new parameterized lower bound that includes each of the previous results as sub-cases. Our main result implies that the permanent does not have Boolean threshold circuits of the following kinds.
- Depth O(1), poly-log(n) bits of non-uniformity, and size s(n) such that for all constants c, s^{(c)}(n) < 2^n. The size s must satisfy another technical condition that is true of functions normally dealt with (such as compositions of polynomials, logarithms, and exponentials).
- Depth o(log log n), poly-log(n) bits of non-uniformity, and size n^{O(1)}.
- Depth O(1), n^{o(1)} bits of non-uniformity, and size n^{O(1)}.
-
On Derandomization and Average-Case Complexity of Monotone Functions.
- Authors: George Karakostas, Jeff Kinne, Dieter van Melkebeek
- In Theoretical Computer Science, volume 434, pages 35-44, 2012.
-
PDF,
Abstract.
Abstract: We investigate whether circuit lower bounds for monotone circuits can be used to derandomize randomized monotone circuits. We show that, in fact, any derandomization of randomized monotone computations would derandomize all randomized computations, whether monotone or not. We prove similar results in the settings of pseudorandom generators and average-case hard functions -- that a pseudorandom generator secure against monotone circuits is also secure with somewhat weaker parameters against general circuits, and that an average-case hard function for monotone circuits is also hard with somewhat weaker parameters for general circuits.
-
Pseudorandom Generators, Typically-Correct Derandomization, and Circuit Lower Bounds.
- Authors: Jeff Kinne, Dieter van Melkebeek, and Ronen Shaltiel.
- In the special issue of Computational Complexity for selected papers from the 13th International Workshop on Randomization and Computation (RANDOM), Volume 21, number 1, pages 3-61, 2012.
- Full version of "Pseudorandom Generators and Typically-Correct Derandomization" with some new proofs and results.
-
PDF,
ECCC report,
Abstract.
Abstract: The area of derandomization attempts to provide efficient deterministic simulations of randomized algorithms in various algorithmic settings. Goldreich and Wigderson introduced a notion of "typically-correct" deterministic simulations, which are allowed to err on few inputs. In this paper we further the study of typically-correct derandomization in two ways.
First, we develop a generic approach for constructing typically-correct derandomizations based on seed-extending pseudorandom generators, which are pseudorandom generators that reveal their seed. We use our approach to obtain both conditional and unconditional typically-correct derandomization results in various algorithmic settings. We show that our technique strictly generalizes an earlier approach by Shaltiel based on randomness extractors, and simplifies the proofs of some known results. We also demonstrate that our approach is applicable in algorithmic settings where earlier work did not apply. For example, we present a typically-correct polynomial-time simulation for every language in BPP based on a hardness assumption that is (seemingly) weaker than the ones used in earlier work.
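The defining feature of a seed-extending pseudorandom generator is that its seed appears verbatim as a prefix of its output. As a toy illustration of that property only (the paper's construction and its hardness requirements are far more involved; the XOR predicate below is just a stand-in for a hard function f):

```python
def xor_bit(bits):
    """Stand-in predicate; in the paper, f would be a function that is
    hard on average for the class of algorithms being derandomized."""
    b = 0
    for x in bits:
        b ^= x
    return b

def seed_extending_prg(seed):
    """G(x) = x · f(x): the generator reveals its seed as a prefix of
    its output and appends extra bits computed from the seed."""
    return seed + [xor_bit(seed)]

out = seed_extending_prg([1, 0, 1])
print(out)  # → [1, 0, 1, 0]; the first three bits are the seed itself
```

Because the seed is revealed, such a generator can be used to derandomize an algorithm on its own input (treating the input as the seed), which is what yields typically-correct simulations.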
Second, we investigate whether typically-correct derandomization of BPP implies circuit lower bounds. Extending the work of Kabanets and Impagliazzo for the zero-error case, we establish a positive answer for error rates in the range considered by Goldreich and Wigderson. In doing so, we provide a simpler proof of the zero-error result. Our proof scales better than the original one and does not rely on the result by Impagliazzo, Kabanets, and Wigderson that NEXP having polynomial-size circuits implies that NEXP coincides with EXP.
-
Deterministic Simulations and Hierarchy Theorems for Randomized Algorithms.
- Authors: Jeff Kinne.
- My Ph.D. dissertation from the Computer Sciences department at the University of Wisconsin-Madison, 2010.
-
PDF,
Slides from defense (PDF),
Abstract.
Abstract:
In this dissertation, we present three research directions related to the question of whether all randomized algorithms can be derandomized, i.e., simulated by deterministic algorithms with a small loss in efficiency.
Typically-Correct Derandomization: A recent line of research has considered “typically correct” deterministic simulations of randomized algorithms, which are allowed to err on few inputs. Such derandomizations may be easier to obtain and/or be more efficient than full derandomizations that do not make mistakes. We further the study of typically-correct derandomization in two ways.
First, we develop a generic approach for constructing typically-correct derandomizations based on seed-extending pseudorandom generators, which are pseudorandom generators that reveal their seed. We use our approach to obtain both conditional and unconditional typically-correct derandomization results in various algorithmic settings. For example, we present a typically-correct polynomial-time simulation for every language in BPP based on a hardness assumption that is weaker than the ones used in earlier work.
Second, we investigate whether typically-correct derandomization of BPP implies circuit lower bounds. We establish a positive answer for small error rates and in doing so provide a proof for the zero-error setting that is simpler and scales better than earlier arguments.
Monotone Computations: Short of derandomizing all efficient randomized algorithms, we can ask to derandomize more restricted classes of randomized algorithms. Because a strong connection has been proved between circuit lower bounds and derandomization, and there has been success proving worst-case circuit lower bounds for monotone circuits, randomized monotone computations are a natural candidate to consider. We show that, in fact, any derandomization of randomized monotone computations would derandomize all randomized algorithms, whether monotone or not. We prove similar results in the settings of pseudorandom generators and average-case hard functions – that a pseudorandom generator secure against monotone circuits is also secure with somewhat weaker parameters against general circuits, and that an average-case hard function for monotone circuits is also hard with somewhat weaker parameters for general circuits.
Hierarchy Theorems: For any computational model, a fundamental question is whether machines with more resources are strictly more powerful than machines with fewer resources. Such results are known as hierarchy theorems. The standard techniques for proving hierarchy theorems fail when applied to bounded-error randomized machines and for other so-called “semantic” models of computation for which a machine must satisfy some promise to be valid. If all randomized algorithms can be efficiently derandomized in a uniform way, hierarchies for bounded-error randomized algorithms would follow from the deterministic hierarchy theorems. But can hierarchies be proved short of proving derandomization?
A recent line of work has made progress by proving time hierarchies for randomized and other semantic models that use one bit of advice. We adapt the techniques to prove results in the setting of space, proving space hierarchy results that are as tight as possible for typical space bounds between logarithmic and linear for randomized and other semantic models that use one bit of advice.
-
Space Hierarchy Results for Randomized and Other Semantic Models.
- Authors: Jeff Kinne and Dieter van Melkebeek.
- In Computational Complexity volume 19, number 3, 2010, pages 423-475.
- Full version of "Space Hierarchy Results for Randomized Models" with some new proofs and results.
- PDF, ECCC report, Abstract.
Abstract: We prove space hierarchy and separation results for randomized and other semantic models of computation with advice, where a machine is only required to behave appropriately when given the correct advice sequence. Previous works on hierarchy and separation theorems for such models focused on time as the resource. We obtain tighter results with space as the resource.
Our main theorems deal with space-bounded randomized machines that always halt, and read as follows. Let s(n) be any space-constructible monotone function that is Ω(log n) and let s'(n) be any function such that s'(n) = ω(s(n+as(n))) for all constants a.
- There exists a language computable by two-sided error randomized machines using s'(n) space and one bit of advice that is not computable by two-sided error machines using s(n) space and min(s(n),n) bits of advice.
- There exists a language computable by zero-sided error randomized machines in space s'(n) with one bit of advice that is not computable by one-sided error randomized machines using s(n) space and min(s(n),n) bits of advice.
We also obtain results that apply to generic semantic models of computation.
-
Pseudorandom Generators and Typically-Correct Derandomization.
- Conference version; superseded by the journal version listed above.
-
Space Hierarchy Results for Randomized Models.
- Conference version; superseded by the journal version listed above.